Mar 18 14:00:14 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 18 14:00:14 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 18 14:00:14 crc restorecon[4685]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc 
restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc 
restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 
14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:14 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:14 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 
14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 
crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 18 14:00:15 crc restorecon[4685]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 18 14:00:15 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 14:00:16 crc kubenswrapper[4857]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.899945 4857 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912900 4857 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912936 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912942 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912946 4857 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912950 4857 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912955 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912958 4857 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912963 4857 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912966 4857 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912971 4857 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912974 4857 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912979 4857 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912982 4857 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912987 4857 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912990 4857 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912994 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.912999 4857 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913004 4857 feature_gate.go:330] unrecognized feature gate: Example Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913009 4857 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913015 4857 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913020 4857 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913036 4857 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913044 4857 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913051 4857 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913056 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913061 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913066 4857 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913071 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913077 4857 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913082 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913087 4857 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913092 4857 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913098 4857 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913103 4857 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913108 4857 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913113 4857 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913119 4857 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913125 4857 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913130 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913135 4857 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913139 4857 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913144 4857 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913148 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913151 4857 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913155 4857 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913158 4857 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913162 4857 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913166 4857 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913169 4857 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913173 4857 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913176 4857 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913180 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913185 4857 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913189 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913193 4857 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913197 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913201 4857 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913204 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913208 4857 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913212 4857 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913215 4857 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913219 4857 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913224 4857 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913230 4857 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913234 4857 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913238 4857 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913242 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913246 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913250 4857 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913255 4857 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.913260 4857 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914125 4857 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914144 4857 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914157 4857 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914163 4857 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914169 4857 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914174 4857 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914181 4857 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914187 4857 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914192 4857 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914196 4857 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914201 4857 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914211 4857 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914216 4857 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914221 4857 flags.go:64] FLAG: --cgroup-root=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914226 4857 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914230 4857 flags.go:64] FLAG: --client-ca-file=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914235 4857 flags.go:64] FLAG: --cloud-config=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914239 4857 flags.go:64] FLAG: --cloud-provider=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914244 4857 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914251 4857 flags.go:64] FLAG: --cluster-domain=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914256 4857 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914260 4857 flags.go:64] FLAG: --config-dir=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914264 4857 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914269 4857 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914275 4857 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914279 4857 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914283 4857 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914288 4857 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914294 4857 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914298 4857 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914303 4857 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914307 4857 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914311 4857 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914317 4857 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914321 4857 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914326 4857 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914330 4857 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914334 4857 flags.go:64] FLAG: --enable-server="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914339 4857 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914348 4857 flags.go:64] FLAG: --event-burst="100"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914354 4857 flags.go:64] FLAG: --event-qps="50"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914358 4857 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914362 4857 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914367 4857 flags.go:64] FLAG: --eviction-hard=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914383 4857 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914388 4857 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914393 4857 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914403 4857 flags.go:64] FLAG: --eviction-soft=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914407 4857 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914412 4857 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914416 4857 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914420 4857 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914425 4857 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914454 4857 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914459 4857 flags.go:64] FLAG: --feature-gates=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914471 4857 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914475 4857 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914480 4857 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914484 4857 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914488 4857 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914493 4857 flags.go:64] FLAG: --help="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914498 4857 flags.go:64] FLAG: --hostname-override=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914502 4857 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914506 4857 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914510 4857 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914515 4857 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914519 4857 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914523 4857 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914527 4857 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914533 4857 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914537 4857 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914542 4857 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914546 4857 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914550 4857 flags.go:64] FLAG: --kube-reserved=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914555 4857 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914559 4857 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914563 4857 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914568 4857 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914572 4857 flags.go:64] FLAG: --lock-file=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914577 4857 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914581 4857 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914585 4857 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914593 4857 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914602 4857 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914607 4857 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914611 4857 flags.go:64] FLAG: --logging-format="text"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914615 4857 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914620 4857 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914624 4857 flags.go:64] FLAG: --manifest-url=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914628 4857 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914634 4857 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914639 4857 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914645 4857 flags.go:64] FLAG: --max-pods="110"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914650 4857 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914660 4857 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914669 4857 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914675 4857 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914680 4857 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914686 4857 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914692 4857 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914706 4857 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914712 4857 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914716 4857 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914721 4857 flags.go:64] FLAG: --pod-cidr=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914726 4857 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914735 4857 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914740 4857 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914745 4857 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914771 4857 flags.go:64] FLAG: --port="10250"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914777 4857 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914782 4857 flags.go:64] FLAG: --provider-id=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914788 4857 flags.go:64] FLAG: --qos-reserved=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914793 4857 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914799 4857 flags.go:64] FLAG: --register-node="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914804 4857 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914809 4857 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914820 4857 flags.go:64] FLAG: --registry-burst="10"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914824 4857 flags.go:64] FLAG: --registry-qps="5"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914828 4857 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914840 4857 flags.go:64] FLAG: --reserved-memory=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914847 4857 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914851 4857 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914856 4857 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914861 4857 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914866 4857 flags.go:64] FLAG: --runonce="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914872 4857 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914878 4857 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914885 4857 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914891 4857 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914896 4857 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914902 4857 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914908 4857 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914913 4857 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914918 4857 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914924 4857 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914929 4857 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914933 4857 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914937 4857 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914942 4857 flags.go:64] FLAG: --system-cgroups=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914946 4857 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914953 4857 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914958 4857 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914963 4857 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914975 4857 flags.go:64] FLAG: --tls-min-version=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914980 4857 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914985 4857 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914990 4857 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.914996 4857 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915001 4857 flags.go:64] FLAG: --v="2"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915009 4857 flags.go:64] FLAG: --version="false"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915017 4857 flags.go:64] FLAG: --vmodule=""
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915024 4857 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915029 4857 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915150 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915158 4857 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915171 4857 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915176 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915181 4857 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915185 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915190 4857 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915196 4857 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915200 4857 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915205 4857 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915210 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915214 4857 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915217 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915221 4857 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915225 4857 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915229 4857 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915233 4857 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915237 4857 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915241 4857 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915246 4857 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915250 4857 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915254 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915258 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915262 4857 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915265 4857 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915269 4857 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915273 4857 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915277 4857 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915280 4857 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915284 4857 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915287 4857 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915291 4857 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915295 4857 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915299 4857 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915302 4857 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915306 4857 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915309 4857 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915313 4857 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915322 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915333 4857 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915337 4857 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915347 4857 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915351 4857 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915355 4857 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915359 4857 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915363 4857 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915366 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915370 4857 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915374 4857 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915378 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915382 4857 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915385 4857 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915389 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915393 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915397 4857 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915400 4857 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915404 4857 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915408 4857 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915412 4857 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915416 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915419 4857 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915423 4857 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915427 4857 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915430 4857 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915434 4857 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915437 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915441 4857 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915445 4857 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915448 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915452 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.915456 4857 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.915470 4857 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.924961 4857 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.925010 4857 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925107 4857 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925116 4857 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925122 4857 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925130 4857 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925138 4857 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925144 4857 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925150 4857 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925155 4857 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925161 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925167 4857 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925173 4857 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925178 4857 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925183 4857 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925188 4857 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925193 4857 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925198 4857 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925204 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925209 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925214 4857 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925219 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925224 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925230 4857 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925234 4857 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925239 4857 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925244 4857 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925251 4857 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925258 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925263 4857 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925268 4857 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925273 4857 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925278 4857 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925285 4857 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925290 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925295 4857 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925301 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925306 4857 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925312 4857 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925317 4857 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925322 4857 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925327 4857 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925334 4857 
feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925341 4857 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925347 4857 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925353 4857 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925358 4857 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925363 4857 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925369 4857 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925374 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925379 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925384 4857 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925389 4857 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925394 4857 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925400 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925406 4857 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925413 4857 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925418 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925424 4857 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925429 4857 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925434 4857 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925440 4857 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925445 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925450 4857 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925455 4857 feature_gate.go:330] unrecognized feature gate: Example Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925461 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925466 4857 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925471 4857 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925476 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925482 4857 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925486 4857 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925491 4857 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925503 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.925513 4857 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925665 4857 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925681 4857 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925688 4857 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925695 4857 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925702 4857 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925709 4857 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925715 4857 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925721 4857 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 14:00:16 crc 
kubenswrapper[4857]: W0318 14:00:16.925728 4857 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925734 4857 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925740 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925764 4857 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925770 4857 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925959 4857 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925964 4857 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925970 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925975 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925980 4857 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925985 4857 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925991 4857 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.925996 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926001 4857 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926007 4857 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926011 4857 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926016 4857 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926022 4857 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926026 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926031 4857 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926036 4857 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926041 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926046 4857 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926052 4857 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926057 4857 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926062 4857 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926068 4857 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926073 4857 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926078 4857 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: 
W0318 14:00:16.926085 4857 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926091 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926097 4857 feature_gate.go:330] unrecognized feature gate: Example Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926102 4857 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926108 4857 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926113 4857 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926118 4857 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926123 4857 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926129 4857 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926136 4857 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926143 4857 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926148 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926153 4857 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926159 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926164 4857 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926169 4857 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926175 4857 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926180 4857 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926185 4857 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926189 4857 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926195 4857 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926200 4857 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926205 4857 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 14:00:16 crc kubenswrapper[4857]: 
W0318 14:00:16.926210 4857 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926214 4857 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926221 4857 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926228 4857 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926233 4857 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926238 4857 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926245 4857 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926250 4857 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926256 4857 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926262 4857 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 14:00:16 crc kubenswrapper[4857]: W0318 14:00:16.926268 4857 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.926278 4857 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.926539 4857 server.go:940] "Client rotation is on, will bootstrap in background" Mar 18 14:00:16 crc kubenswrapper[4857]: E0318 14:00:16.930644 4857 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.934279 4857 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.934454 4857 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.936300 4857 server.go:997] "Starting client certificate rotation" Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.936342 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.936730 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.969445 4857 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 14:00:16 crc kubenswrapper[4857]: E0318 14:00:16.973945 4857 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:16 crc kubenswrapper[4857]: I0318 14:00:16.977328 4857 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.001609 4857 log.go:25] "Validated CRI v1 runtime API" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.047214 4857 log.go:25] "Validated CRI v1 image API" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.050177 4857 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.057546 4857 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-18-13-56-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.057608 4857 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.076183 4857 manager.go:217] Machine: {Timestamp:2026-03-18 14:00:17.073791183 +0000 UTC m=+1.202919660 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9936aba9-9b46-46dc-9830-1269a6a97f25 BootID:e6b8b991-9330-4333-ba64-213d0025158e Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:82:dc:af Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:82:dc:af Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:75:ef:71 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:5e:c1:be Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:21:c3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:af:5d:61 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:12:8f:10:a5:62:c4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:c8:b5:ee:e7:38 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.076467 4857 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.076672 4857 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.078178 4857 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.078447 4857 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.078507 4857 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.078815 4857 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.078825 4857 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.079247 4857 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.079288 4857 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.079635 4857 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.079766 4857 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.084926 4857 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.084953 4857 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.084992 4857 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.085008 4857 kubelet.go:324] "Adding apiserver pod source"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.085136 4857 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.089862 4857 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.090937 4857 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.092108 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.092225 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.092389 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.092532 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.094132 4857 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097202 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097238 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097253 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097267 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097296 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097312 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097330 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097355 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097370 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097381 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097395 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.097407 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.098662 4857 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.099426 4857 server.go:1280] "Started kubelet"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.099666 4857 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 14:00:17 crc systemd[1]: Started Kubernetes Kubelet.
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.102121 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.101873 4857 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.105890 4857 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.109840 4857 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.109907 4857 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.110521 4857 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.110576 4857 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.110633 4857 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.110528 4857 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.111901 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.112521 4857 factory.go:55] Registering systemd factory
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.112556 4857 factory.go:221] Registration of the systemd container factory successfully
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.112508 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.113538 4857 factory.go:153] Registering CRI-O factory
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.113573 4857 factory.go:221] Registration of the crio container factory successfully
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.113737 4857 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.113818 4857 factory.go:103] Registering Raw factory
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.113858 4857 manager.go:1196] Started watching for new ooms in manager
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.113989 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="200ms"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.114703 4857 server.go:460] "Adding debug handlers to kubelet server"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.115596 4857 manager.go:319] Starting recovery of all containers
Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.113308 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.89:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189df447b6ba9a00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,LastTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.124978 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127399 4857 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127435 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127449 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127460 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127477 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127489 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127501 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127514 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127529 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127542 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127557 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127567 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127581 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127637 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127648 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127659 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127668 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127678 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127689 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127699 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127709 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127721 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127730 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127741 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127766 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127781 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127810 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127825 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127839 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127886 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127900 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127912 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127939 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127952 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127966 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127982 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.127996 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128011 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128024 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128038 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128054 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128068 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128083 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128097 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128112 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128125 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128138 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128151 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128163 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128177 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128191 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128202 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128226 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128239 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128252 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128266 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128282 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128294 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128307 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128320 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128335 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128347 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128361 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128374 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128386 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128400 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128413 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128424 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128436 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128447 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128458 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128470 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128482 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128495 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128506 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128520 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128537 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128551 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128563 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128575 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128589 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128603 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128615 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128628 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128645 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128659 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128672 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128688 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128703 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d"
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128716 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128728 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128743 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128779 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128793 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128808 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128821 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128834 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128848 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128862 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128876 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128889 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128903 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128917 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128930 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128950 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128965 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128979 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.128994 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129011 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129024 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129039 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129054 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129067 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129082 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129104 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129153 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129173 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129184 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129195 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129208 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129218 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129228 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129239 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129248 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129262 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129272 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129283 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129293 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129304 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129315 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129326 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" 
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129336 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129347 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129358 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129370 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129381 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129722 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129733 4857 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129744 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129774 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129792 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129805 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129819 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129977 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.129990 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130001 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130011 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130021 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130076 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130088 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 18 
14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130099 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130109 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130119 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130128 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130138 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130147 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130158 4857 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130169 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130180 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130193 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130205 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130217 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130266 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130280 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130293 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130305 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130316 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130327 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130337 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130350 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130363 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130377 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130391 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130405 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130417 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130429 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130441 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130453 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130464 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130475 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130487 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130497 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130509 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130519 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130529 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130539 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130550 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130561 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130572 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130582 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130592 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130604 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130614 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130624 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130635 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130646 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130657 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130667 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130677 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" 
seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130687 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130697 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130708 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130718 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130727 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130740 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: 
I0318 14:00:17.130773 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130787 4857 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130799 4857 reconstruct.go:97] "Volume reconstruction finished" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.130809 4857 reconciler.go:26] "Reconciler: start to sync state" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.144342 4857 manager.go:324] Recovery completed Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.158268 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.160082 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.160145 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.160167 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.160347 4857 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.161275 4857 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.161313 4857 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.161358 4857 state_mem.go:36] "Initialized new in-memory state store" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.162130 4857 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.162258 4857 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.162325 4857 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.162410 4857 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.164207 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.164320 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.183131 4857 policy_none.go:49] "None policy: Start" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.184267 4857 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 
14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.184311 4857 state_mem.go:35] "Initializing new in-memory state store" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.211786 4857 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.254292 4857 manager.go:334] "Starting Device Plugin manager" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.254566 4857 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.254585 4857 server.go:79] "Starting device plugin registration server" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.255047 4857 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.255077 4857 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.255680 4857 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.255811 4857 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.255820 4857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.263215 4857 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.263553 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.315366 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="400ms" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.334354 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.334966 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.355577 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.409245 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.409682 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.410121 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.410413 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:17 
crc kubenswrapper[4857]: E0318 14:00:17.411933 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.412514 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.412578 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.412631 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.413155 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.413866 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.414157 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415083 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415104 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415382 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.415543 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415604 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.415673 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417345 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417375 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417382 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417595 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417615 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417623 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417736 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417884 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417927 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.417987 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418208 
4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418237 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418672 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418694 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418714 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418858 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418873 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.418895 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419130 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419192 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419781 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419825 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419835 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419968 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.419988 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420055 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420078 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420089 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420537 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420563 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.420572 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.436981 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.437074 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.437320 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.437381 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.487361 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.532136 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-199f47595a2c547eb200761c530609ce8e88e933e095f0e7019bb33cbd4667fb WatchSource:0}: Error finding container 199f47595a2c547eb200761c530609ce8e88e933e095f0e7019bb33cbd4667fb: Status 404 returned error can't find the container with id 199f47595a2c547eb200761c530609ce8e88e933e095f0e7019bb33cbd4667fb Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540082 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540179 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540251 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540308 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540355 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540401 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540438 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540479 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540525 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540575 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540626 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540660 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.540703 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.612189 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.613767 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:17 crc 
kubenswrapper[4857]: I0318 14:00:17.613849 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.613859 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.613880 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.614342 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.642610 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.642673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.642718 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.642787 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.642802 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643054 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643065 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643063 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643121 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643173 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643200 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643225 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643232 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643249 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643270 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643343 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643311 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643312 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643269 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643402 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643423 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643284 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643442 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643477 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643499 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.643596 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: E0318 14:00:17.716862 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="800ms" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.759075 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.770051 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-10fc2ad521fc2d91aab9cf7a9299568ce442b3937436e85123b08a231dbcf7ab WatchSource:0}: Error finding container 10fc2ad521fc2d91aab9cf7a9299568ce442b3937436e85123b08a231dbcf7ab: Status 404 returned error can't find the container with id 10fc2ad521fc2d91aab9cf7a9299568ce442b3937436e85123b08a231dbcf7ab Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.844897 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.854780 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a50d9c6daf9f4eec7ed49cb8cc8d90f69e1814c5a41f6715468684e1ac9f5727 WatchSource:0}: Error finding container a50d9c6daf9f4eec7ed49cb8cc8d90f69e1814c5a41f6715468684e1ac9f5727: Status 404 returned error can't find the container with id a50d9c6daf9f4eec7ed49cb8cc8d90f69e1814c5a41f6715468684e1ac9f5727 Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.860297 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: I0318 14:00:17.868635 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.875619 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1d9ba5beb287bc7f0ecf28b8b5d6b521ab2f028ea652e7c7c83115d3ae478ab5 WatchSource:0}: Error finding container 1d9ba5beb287bc7f0ecf28b8b5d6b521ab2f028ea652e7c7c83115d3ae478ab5: Status 404 returned error can't find the container with id 1d9ba5beb287bc7f0ecf28b8b5d6b521ab2f028ea652e7c7c83115d3ae478ab5 Mar 18 14:00:17 crc kubenswrapper[4857]: W0318 14:00:17.880633 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e395052f197c8693eb0daa068674038505ea24d1be2d89a4a203dc1c4b90be36 WatchSource:0}: Error finding container e395052f197c8693eb0daa068674038505ea24d1be2d89a4a203dc1c4b90be36: Status 404 returned error can't find the container with id e395052f197c8693eb0daa068674038505ea24d1be2d89a4a203dc1c4b90be36 Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.015308 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.016969 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.017020 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.017031 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.017061 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.017655 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.103640 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.169468 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10fc2ad521fc2d91aab9cf7a9299568ce442b3937436e85123b08a231dbcf7ab"} Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.170707 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"199f47595a2c547eb200761c530609ce8e88e933e095f0e7019bb33cbd4667fb"} Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.172082 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e395052f197c8693eb0daa068674038505ea24d1be2d89a4a203dc1c4b90be36"} Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.173743 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1d9ba5beb287bc7f0ecf28b8b5d6b521ab2f028ea652e7c7c83115d3ae478ab5"} Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.175524 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a50d9c6daf9f4eec7ed49cb8cc8d90f69e1814c5a41f6715468684e1ac9f5727"} Mar 18 14:00:18 crc kubenswrapper[4857]: W0318 14:00:18.256689 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.256833 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:18 crc kubenswrapper[4857]: W0318 14:00:18.277466 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.277617 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.518112 4857 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="1.6s" Mar 18 14:00:18 crc kubenswrapper[4857]: W0318 14:00:18.550262 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.550411 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:18 crc kubenswrapper[4857]: W0318 14:00:18.680517 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.680616 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.818284 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.820374 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.820428 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.820441 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:18 crc kubenswrapper[4857]: I0318 14:00:18.820470 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:18 crc kubenswrapper[4857]: E0318 14:00:18.821221 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.103342 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.126580 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 14:00:19 crc kubenswrapper[4857]: E0318 14:00:19.128117 4857 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.180152 4857 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207" exitCode=0 Mar 18 14:00:19 crc 
kubenswrapper[4857]: I0318 14:00:19.180211 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.180332 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.181838 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.181896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.181921 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.182591 4857 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5" exitCode=0 Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.182654 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.182706 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.183475 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.183510 4857 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.183521 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.186334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.186373 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.186391 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.188500 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887" exitCode=0 Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.188592 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.188617 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:19 crc 
kubenswrapper[4857]: I0318 14:00:19.189472 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.189509 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.189529 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.192532 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.193979 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.194002 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.194013 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.285893 4857 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18" exitCode=0 Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.285946 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18"} Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.286091 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.287365 4857 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.287403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:19 crc kubenswrapper[4857]: I0318 14:00:19.287414 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:19 crc kubenswrapper[4857]: W0318 14:00:19.966532 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:19 crc kubenswrapper[4857]: E0318 14:00:19.966698 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.103063 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:00:20 crc kubenswrapper[4857]: E0318 14:00:20.131299 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="3.2s" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.314559 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 
14:00:20.314564 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0"} Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.316260 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.316331 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.316377 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.320255 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d"} Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.320289 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23"} Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.323781 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8"} Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.323890 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 
14:00:20.326261 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.326297 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.326310 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.328439 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2"}
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.328475 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30"}
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.330359 4857 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204" exitCode=0
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.330397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204"}
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.330465 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.331229 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.331260 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.331273 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.421328 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.424012 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.424097 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.424115 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:20 crc kubenswrapper[4857]: I0318 14:00:20.424162 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 18 14:00:20 crc kubenswrapper[4857]: E0318 14:00:20.425390 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc"
Mar 18 14:00:20 crc kubenswrapper[4857]: W0318 14:00:20.837738 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:20 crc kubenswrapper[4857]: E0318 14:00:20.837830 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:20 crc kubenswrapper[4857]: W0318 14:00:20.848736 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:20 crc kubenswrapper[4857]: E0318 14:00:20.848838 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:21 crc kubenswrapper[4857]: W0318 14:00:21.001567 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:21 crc kubenswrapper[4857]: E0318 14:00:21.001647 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.103434 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.333049 4857 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124" exitCode=0
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.333104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124"}
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.333211 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.334008 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.334027 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.334035 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.351016 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534"}
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.351351 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.352356 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.352391 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.352411 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.357279 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.358226 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b"}
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.358264 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863"}
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.358357 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.359774 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.359801 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.359827 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.388108 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.388185 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.388212 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:21 crc kubenswrapper[4857]: I0318 14:00:21.565972 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.103676 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.364144 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294"}
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.368550 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.368598 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.368605 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369779 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5c6b422f31ed1f48c08b131d5a0de28501a857d9a13c13f29cb946a221367f2"}
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369905 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369914 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369917 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.369980 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.370002 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.370010 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.370018 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.370054 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.370064 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.766032 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:22 crc kubenswrapper[4857]: I0318 14:00:22.791306 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.103461 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:23 crc kubenswrapper[4857]: E0318 14:00:23.332578 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="6.4s"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.372869 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.375572 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5c6b422f31ed1f48c08b131d5a0de28501a857d9a13c13f29cb946a221367f2" exitCode=255
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.375690 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e5c6b422f31ed1f48c08b131d5a0de28501a857d9a13c13f29cb946a221367f2"}
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.375806 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.376796 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.376830 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.376840 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.377330 4857 scope.go:117] "RemoveContainer" containerID="e5c6b422f31ed1f48c08b131d5a0de28501a857d9a13c13f29cb946a221367f2"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.379710 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4"}
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.379776 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941"}
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.379792 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146"}
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.379834 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.379905 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381180 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381208 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381217 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381339 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381394 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.381407 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.524082 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 14:00:23 crc kubenswrapper[4857]: E0318 14:00:23.528223 4857 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.631326 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.631410 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.631719 4857 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.631871 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.632811 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.632848 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.632862 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.632894 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 18 14:00:23 crc kubenswrapper[4857]: E0318 14:00:23.633387 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.89:6443: connect: connection refused" node="crc"
Mar 18 14:00:23 crc kubenswrapper[4857]: I0318 14:00:23.886576 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.103133 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:24 crc kubenswrapper[4857]: W0318 14:00:24.233198 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.89:6443: connect: connection refused
Mar 18 14:00:24 crc kubenswrapper[4857]: E0318 14:00:24.233309 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.89:6443: connect: connection refused" logger="UnhandledError"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.445176 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.447806 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80"}
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.447915 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.447974 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.448824 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.448892 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.448906 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.452630 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.453999 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.455920 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b"}
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.457565 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.457639 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.457665 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.459223 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.459423 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.459462 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.567070 4857 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:00:24 crc kubenswrapper[4857]: I0318 14:00:24.567164 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.455277 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.455382 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.455435 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.455509 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456259 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456289 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456298 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456660 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456699 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.456712 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.457926 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.457963 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.457976 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:25 crc kubenswrapper[4857]: I0318 14:00:25.490666 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 18 14:00:26 crc kubenswrapper[4857]: I0318 14:00:26.458244 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:26 crc kubenswrapper[4857]: I0318 14:00:26.459403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:26 crc kubenswrapper[4857]: I0318 14:00:26.459448 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:26 crc kubenswrapper[4857]: I0318 14:00:26.459460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:27 crc kubenswrapper[4857]: E0318 14:00:27.416744 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 18 14:00:27 crc kubenswrapper[4857]: I0318 14:00:27.460375 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:27 crc kubenswrapper[4857]: I0318 14:00:27.461303 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:27 crc kubenswrapper[4857]: I0318 14:00:27.461336 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:27 crc kubenswrapper[4857]: I0318 14:00:27.461346 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:28 crc kubenswrapper[4857]: I0318 14:00:28.591732 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Mar 18 14:00:28 crc kubenswrapper[4857]: I0318 14:00:28.592008 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:28 crc kubenswrapper[4857]: I0318 14:00:28.593503 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:28 crc kubenswrapper[4857]: I0318 14:00:28.593551 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:28 crc kubenswrapper[4857]: I0318 14:00:28.593561 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.970169 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.970424 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.972425 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.972463 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.972476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:29 crc kubenswrapper[4857]: I0318 14:00:29.975625 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.034517 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.036743 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.036893 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.036918 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.036997 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.468573 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.469772 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.469815 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:30 crc kubenswrapper[4857]: I0318 14:00:30.469831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.239672 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.240037 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.241582 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.241655 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.241682 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.261704 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.291837 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.475031 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.477768 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.477821 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.477834 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:32 crc kubenswrapper[4857]: I0318 14:00:32.498930 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Mar 18 14:00:33 crc kubenswrapper[4857]: I0318 14:00:33.477320 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 14:00:33 crc kubenswrapper[4857]: I0318 14:00:33.478276 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:00:33 crc kubenswrapper[4857]: I0318 14:00:33.478309 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:00:33 crc kubenswrapper[4857]: I0318 14:00:33.478320 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:00:34 crc kubenswrapper[4857]: I0318 14:00:34.566995 4857 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:00:34 crc kubenswrapper[4857]: I0318 14:00:34.567146 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.104478 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Mar 18 14:00:35 crc kubenswrapper[4857]: W0318 14:00:35.203603 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.203772 4857 trace.go:236] Trace[1120377846]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Mar-2026 14:00:25.201) (total time: 10002ms):
Mar 18 14:00:35 crc kubenswrapper[4857]: Trace[1120377846]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:00:35.203)
Mar 18 14:00:35 crc kubenswrapper[4857]: Trace[1120377846]: [10.002091594s] [10.002091594s] END
Mar 18 14:00:35 crc kubenswrapper[4857]: E0318 14:00:35.203801 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.485400 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.485991 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.489226 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80"}
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.489453 4857 scope.go:117] "RemoveContainer" containerID="e5c6b422f31ed1f48c08b131d5a0de28501a857d9a13c13f29cb946a221367f2"
Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.489692 4857 kubelet_node_status.go:401] "Setting node annotation to enable
volume controller attach/detach" Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.489896 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" exitCode=255 Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.490943 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.491021 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.491046 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:35 crc kubenswrapper[4857]: I0318 14:00:35.491939 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:35 crc kubenswrapper[4857]: E0318 14:00:35.492379 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:00:36 crc kubenswrapper[4857]: W0318 14:00:36.060907 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 18 14:00:36 crc kubenswrapper[4857]: I0318 14:00:36.061019 4857 trace.go:236] Trace[1066666609]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Mar-2026 14:00:26.059) (total time: 10001ms): Mar 18 14:00:36 crc 
kubenswrapper[4857]: Trace[1066666609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:00:36.060) Mar 18 14:00:36 crc kubenswrapper[4857]: Trace[1066666609]: [10.001315693s] [10.001315693s] END Mar 18 14:00:36 crc kubenswrapper[4857]: E0318 14:00:36.061048 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 18 14:00:36 crc kubenswrapper[4857]: W0318 14:00:36.415390 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 18 14:00:36 crc kubenswrapper[4857]: I0318 14:00:36.415531 4857 trace.go:236] Trace[1047918859]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Mar-2026 14:00:26.413) (total time: 10001ms): Mar 18 14:00:36 crc kubenswrapper[4857]: Trace[1047918859]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:00:36.415) Mar 18 14:00:36 crc kubenswrapper[4857]: Trace[1047918859]: [10.001645551s] [10.001645551s] END Mar 18 14:00:36 crc kubenswrapper[4857]: E0318 14:00:36.415562 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 18 14:00:36 crc kubenswrapper[4857]: I0318 
14:00:36.495926 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 18 14:00:36 crc kubenswrapper[4857]: E0318 14:00:36.572470 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.189df447b6ba9a00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,LastTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:37 crc kubenswrapper[4857]: E0318 14:00:37.493511 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:00:37 crc kubenswrapper[4857]: E0318 14:00:37.939279 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 2026-02-23T05:33:13Z" node="crc" Mar 18 14:00:37 crc kubenswrapper[4857]: W0318 14:00:37.939722 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 
2026-02-23T05:33:13Z Mar 18 14:00:37 crc kubenswrapper[4857]: E0318 14:00:37.939848 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 18 14:00:37 crc kubenswrapper[4857]: E0318 14:00:37.940781 4857 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 18 14:00:37 crc kubenswrapper[4857]: E0318 14:00:37.947080 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 18 14:00:37 crc kubenswrapper[4857]: I0318 14:00:37.947417 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:37Z is after 2026-02-23T05:33:13Z Mar 18 14:00:37 crc kubenswrapper[4857]: I0318 14:00:37.953669 4857 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 18 14:00:37 crc kubenswrapper[4857]: I0318 14:00:37.953776 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 18 14:00:37 crc kubenswrapper[4857]: I0318 14:00:37.958981 4857 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 18 14:00:37 crc kubenswrapper[4857]: I0318 14:00:37.959121 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 18 14:00:38 crc kubenswrapper[4857]: I0318 14:00:38.139184 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:38Z is after 2026-02-23T05:33:13Z Mar 18 14:00:38 crc kubenswrapper[4857]: I0318 14:00:38.657994 4857 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]log ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]etcd ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-filter ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-apiextensions-informers ok Mar 18 14:00:38 crc kubenswrapper[4857]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Mar 18 14:00:38 crc kubenswrapper[4857]: [-]poststarthook/crd-informer-synced failed: reason withheld Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-system-namespaces-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 18 14:00:38 crc kubenswrapper[4857]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 18 14:00:38 crc kubenswrapper[4857]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 18 14:00:38 crc kubenswrapper[4857]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/bootstrap-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/start-kube-aggregator-informers ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-registration-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-discovery-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]autoregister-completion ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-openapi-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 18 14:00:38 crc kubenswrapper[4857]: livez check failed Mar 18 14:00:38 crc kubenswrapper[4857]: I0318 14:00:38.658101 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:00:39 crc 
kubenswrapper[4857]: I0318 14:00:39.107376 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:39Z is after 2026-02-23T05:33:13Z Mar 18 14:00:40 crc kubenswrapper[4857]: I0318 14:00:40.108180 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:40Z is after 2026-02-23T05:33:13Z Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.002560 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.003019 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.004390 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.004484 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.004527 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.005652 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:41 crc kubenswrapper[4857]: E0318 14:00:41.006072 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:00:41 crc kubenswrapper[4857]: I0318 14:00:41.109563 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:41Z is after 2026-02-23T05:33:13Z Mar 18 14:00:42 crc kubenswrapper[4857]: I0318 14:00:42.108773 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:42Z is after 2026-02-23T05:33:13Z Mar 18 14:00:42 crc kubenswrapper[4857]: W0318 14:00:42.734362 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:42Z is after 2026-02-23T05:33:13Z Mar 18 14:00:42 crc kubenswrapper[4857]: E0318 14:00:42.734481 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:42Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.107306 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:43Z is after 2026-02-23T05:33:13Z Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.610377 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.610624 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.612097 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.612157 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.612169 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.612899 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:43 crc kubenswrapper[4857]: E0318 14:00:43.613133 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.615340 
4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.662401 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.663441 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.663491 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.663502 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:43 crc kubenswrapper[4857]: I0318 14:00:43.664016 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:43 crc kubenswrapper[4857]: E0318 14:00:43.664170 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.110603 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 2026-02-23T05:33:13Z Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.567360 4857 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.567461 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.567622 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.567825 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.569098 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.569166 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.569184 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.569787 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container 
cluster-policy-controller failed startup probe, will be restarted" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.569973 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c" gracePeriod=30 Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.940573 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.941855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.941892 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.941904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:44 crc kubenswrapper[4857]: I0318 14:00:44.941929 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:44 crc kubenswrapper[4857]: E0318 14:00:44.945499 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 2026-02-23T05:33:13Z" node="crc" Mar 18 14:00:44 crc kubenswrapper[4857]: E0318 14:00:44.951422 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-18T14:00:44Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.108742 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:45Z is after 2026-02-23T05:33:13Z Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.671579 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.672547 4857 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c" exitCode=255 Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.672606 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c"} Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.672639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4"} Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.672775 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.674147 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:45 crc 
kubenswrapper[4857]: I0318 14:00:45.674210 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:45 crc kubenswrapper[4857]: I0318 14:00:45.674229 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:46 crc kubenswrapper[4857]: I0318 14:00:46.105398 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:46Z is after 2026-02-23T05:33:13Z Mar 18 14:00:46 crc kubenswrapper[4857]: E0318 14:00:46.579103 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:46Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189df447b6ba9a00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,LastTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:47 crc kubenswrapper[4857]: I0318 14:00:47.108696 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-18T14:00:47Z is after 2026-02-23T05:33:13Z Mar 18 14:00:47 crc kubenswrapper[4857]: W0318 14:00:47.434265 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:47Z is after 2026-02-23T05:33:13Z Mar 18 14:00:47 crc kubenswrapper[4857]: E0318 14:00:47.434533 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 18 14:00:47 crc kubenswrapper[4857]: E0318 14:00:47.494381 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:00:47 crc kubenswrapper[4857]: W0318 14:00:47.840833 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:47Z is after 2026-02-23T05:33:13Z Mar 18 14:00:47 crc kubenswrapper[4857]: E0318 14:00:47.840926 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:00:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 18 14:00:48 crc kubenswrapper[4857]: I0318 14:00:48.106572 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:48Z is after 2026-02-23T05:33:13Z Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.106540 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:49Z is after 2026-02-23T05:33:13Z Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.970220 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.970435 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.971851 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.971895 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:49 crc kubenswrapper[4857]: I0318 14:00:49.971907 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:50 crc kubenswrapper[4857]: I0318 14:00:50.106100 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:50Z is after 2026-02-23T05:33:13Z Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.108350 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:51Z is after 2026-02-23T05:33:13Z Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.565773 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.565996 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.567302 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.567349 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.567362 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.945906 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.947649 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.947704 4857 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.947722 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:51 crc kubenswrapper[4857]: I0318 14:00:51.947780 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:51 crc kubenswrapper[4857]: E0318 14:00:51.951394 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:51Z is after 2026-02-23T05:33:13Z" node="crc" Mar 18 14:00:51 crc kubenswrapper[4857]: E0318 14:00:51.955897 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:51Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 18 14:00:52 crc kubenswrapper[4857]: I0318 14:00:52.106726 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:52Z is after 2026-02-23T05:33:13Z Mar 18 14:00:53 crc kubenswrapper[4857]: I0318 14:00:53.107290 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:53Z is after 2026-02-23T05:33:13Z Mar 18 14:00:54 crc kubenswrapper[4857]: I0318 14:00:54.107275 4857 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:54Z is after 2026-02-23T05:33:13Z Mar 18 14:00:54 crc kubenswrapper[4857]: I0318 14:00:54.566466 4857 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:00:54 crc kubenswrapper[4857]: I0318 14:00:54.566576 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:00:55 crc kubenswrapper[4857]: I0318 14:00:55.111375 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:00:55 crc kubenswrapper[4857]: I0318 14:00:55.325825 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 14:00:55 crc kubenswrapper[4857]: I0318 14:00:55.344621 4857 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.111031 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.162807 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.164429 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.164465 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.164476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.165037 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.653387 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447b6ba9a00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,LastTimestamp:2026-03-18 14:00:17.099381248 +0000 UTC m=+1.228509705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.659510 4857 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.666377 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.674245 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.682728 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.409636129 +0000 UTC m=+1.538764656,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.689210 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.41008696 +0000 UTC m=+1.539215497,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.697088 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.410282835 +0000 UTC m=+1.539411332,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.703173 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is 
now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.41256846 +0000 UTC m=+1.541696927,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.706516 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.709107 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8"} Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.709488 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.710848 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.710890 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:56 crc kubenswrapper[4857]: I0318 14:00:56.710904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.713528 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.41258631 +0000 UTC m=+1.541714777,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.719463 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.412639381 +0000 UTC m=+1.541767848,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.723901 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447c978d500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.41383808 +0000 UTC m=+1.542966537,LastTimestamp:2026-03-18 14:00:17.41383808 +0000 UTC m=+1.542966537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.728509 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.41507122 +0000 UTC m=+1.544199717,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.733384 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.41509519 
+0000 UTC m=+1.544223677,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.738186 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.415114791 +0000 UTC m=+1.544243278,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.742309 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.417369085 +0000 UTC m=+1.546497542,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.747088 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.417380055 +0000 UTC m=+1.546508512,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.752661 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.417387106 +0000 UTC m=+1.546515563,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: W0318 14:00:56.753298 4857 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.753384 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.762845 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.417603731 +0000 UTC m=+1.546732188,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.768518 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.417621061 +0000 UTC m=+1.546749518,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.772237 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.417628051 +0000 UTC m=+1.546756508,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.777787 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.417914388 +0000 UTC m=+1.547042845,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.781784 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.417934889 +0000 UTC m=+1.547063346,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.785499 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba5a461c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba5a461c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16017718 +0000 UTC 
m=+1.289305677,LastTimestamp:2026-03-18 14:00:17.41799584 +0000 UTC m=+1.547124297,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.789848 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba597501\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba597501 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.160123649 +0000 UTC m=+1.289252146,LastTimestamp:2026-03-18 14:00:17.418689787 +0000 UTC m=+1.547818244,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.795150 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189df447ba59fb3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189df447ba59fb3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.16015801 +0000 UTC m=+1.289286497,LastTimestamp:2026-03-18 14:00:17.418698937 +0000 UTC m=+1.547827394,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.800708 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df447d12c9ae6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.543060198 +0000 UTC m=+1.672188695,LastTimestamp:2026-03-18 14:00:17.543060198 +0000 UTC m=+1.672188695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.804777 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df447ded757fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.772353531 +0000 UTC m=+1.901481988,LastTimestamp:2026-03-18 14:00:17.772353531 +0000 UTC m=+1.901481988,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.808120 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df447e3e47240 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.857098304 +0000 UTC m=+1.986226761,LastTimestamp:2026-03-18 14:00:17.857098304 +0000 UTC m=+1.986226761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.811649 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df447e5374007 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.879302151 +0000 UTC m=+2.008430608,LastTimestamp:2026-03-18 14:00:17.879302151 +0000 UTC m=+2.008430608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.817294 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df447e5679ed5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:17.882472149 +0000 UTC m=+2.011600606,LastTimestamp:2026-03-18 14:00:17.882472149 +0000 UTC m=+2.011600606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.822997 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df447f9660222 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.217910818 +0000 UTC m=+2.347039285,LastTimestamp:2026-03-18 14:00:18.217910818 +0000 UTC m=+2.347039285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.826857 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df447f9687319 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.218070809 +0000 UTC 
m=+2.347199266,LastTimestamp:2026-03-18 14:00:18.218070809 +0000 UTC m=+2.347199266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.831607 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df447f968c625 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.218092069 +0000 UTC m=+2.347220526,LastTimestamp:2026-03-18 14:00:18.218092069 +0000 UTC m=+2.347220526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.836635 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df447f972624a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.218721866 +0000 UTC m=+2.347850323,LastTimestamp:2026-03-18 
14:00:18.218721866 +0000 UTC m=+2.347850323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.840347 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df447f9728eaa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.218733226 +0000 UTC m=+2.347861683,LastTimestamp:2026-03-18 14:00:18.218733226 +0000 UTC m=+2.347861683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.845563 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df447fa2908a1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container 
wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.230692001 +0000 UTC m=+2.359820458,LastTimestamp:2026-03-18 14:00:18.230692001 +0000 UTC m=+2.359820458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.849614 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df447fa4a3806 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.232866822 +0000 UTC m=+2.361995279,LastTimestamp:2026-03-18 14:00:18.232866822 +0000 UTC m=+2.361995279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.852917 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df447fa5d98e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.234136805 +0000 UTC m=+2.363265262,LastTimestamp:2026-03-18 14:00:18.234136805 +0000 UTC m=+2.363265262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.857201 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df447fba7c6c0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.255775424 +0000 UTC m=+2.384903881,LastTimestamp:2026-03-18 14:00:18.255775424 +0000 UTC m=+2.384903881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.861273 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df447fbaa5630 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.255943216 +0000 UTC m=+2.385071673,LastTimestamp:2026-03-18 14:00:18.255943216 +0000 UTC m=+2.385071673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.864825 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df447fc075ccb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.262039755 +0000 UTC m=+2.391168212,LastTimestamp:2026-03-18 14:00:18.262039755 +0000 UTC m=+2.391168212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.866740 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4480ed1e0de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.577301726 +0000 UTC m=+2.706430173,LastTimestamp:2026-03-18 14:00:18.577301726 +0000 UTC m=+2.706430173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.869858 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4480f8bc5c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.589484484 +0000 UTC m=+2.718612941,LastTimestamp:2026-03-18 14:00:18.589484484 +0000 UTC m=+2.718612941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.873737 4857 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4480f9dbbd2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.590661586 +0000 UTC m=+2.719790043,LastTimestamp:2026-03-18 14:00:18.590661586 +0000 UTC m=+2.719790043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.882000 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df448251b1d69 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.951200105 +0000 UTC m=+3.080328602,LastTimestamp:2026-03-18 14:00:18.951200105 +0000 UTC m=+3.080328602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.887772 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4482c697b76 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.073776502 +0000 UTC m=+3.202904959,LastTimestamp:2026-03-18 14:00:19.073776502 +0000 UTC m=+3.202904959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.892691 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4482c7fe108 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.075244296 +0000 UTC m=+3.204372753,LastTimestamp:2026-03-18 14:00:19.075244296 +0000 UTC m=+3.204372753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.898504 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df44832fae840 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.183970368 +0000 UTC m=+3.313098865,LastTimestamp:2026-03-18 14:00:19.183970368 +0000 UTC m=+3.313098865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 
18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.902507 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df448330bc3e1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.185075169 +0000 UTC m=+3.314203626,LastTimestamp:2026-03-18 14:00:19.185075169 +0000 UTC m=+3.314203626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.907097 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448337a5293 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.192320659 +0000 UTC m=+3.321449116,LastTimestamp:2026-03-18 14:00:19.192320659 +0000 UTC m=+3.321449116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.913604 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4483946ad81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.289599361 +0000 UTC m=+3.418727828,LastTimestamp:2026-03-18 14:00:19.289599361 +0000 UTC m=+3.418727828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.918689 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4483d55746a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.35767665 +0000 UTC m=+3.486805117,LastTimestamp:2026-03-18 14:00:19.35767665 +0000 UTC m=+3.486805117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.923505 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df448468f26b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.512452792 +0000 UTC m=+3.641581249,LastTimestamp:2026-03-18 14:00:19.512452792 +0000 UTC m=+3.641581249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.927495 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df4484a5094c9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.575461065 +0000 UTC m=+3.704589533,LastTimestamp:2026-03-18 14:00:19.575461065 +0000 UTC m=+3.704589533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.931847 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df4484a6c4504 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.577275652 +0000 UTC m=+3.706404119,LastTimestamp:2026-03-18 14:00:19.577275652 +0000 UTC m=+3.706404119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.936619 4857 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df4484b948475 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.596690549 +0000 UTC m=+3.725819006,LastTimestamp:2026-03-18 14:00:19.596690549 +0000 UTC m=+3.725819006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.941074 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df4484ba33646 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.597653574 +0000 UTC m=+3.726782031,LastTimestamp:2026-03-18 14:00:19.597653574 +0000 UTC 
m=+3.726782031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.944565 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189df4484bfae163 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.603399011 +0000 UTC m=+3.732527468,LastTimestamp:2026-03-18 14:00:19.603399011 +0000 UTC m=+3.732527468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.948127 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4484bfaeb59 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.603401561 +0000 UTC 
m=+3.732530018,LastTimestamp:2026-03-18 14:00:19.603401561 +0000 UTC m=+3.732530018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.951394 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4484bfd4551 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.603555665 +0000 UTC m=+3.732684122,LastTimestamp:2026-03-18 14:00:19.603555665 +0000 UTC m=+3.732684122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.954849 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df44858e6744b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.820164171 +0000 UTC 
m=+3.949292628,LastTimestamp:2026-03-18 14:00:19.820164171 +0000 UTC m=+3.949292628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.959686 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df44859172ac5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.823356613 +0000 UTC m=+3.952485080,LastTimestamp:2026-03-18 14:00:19.823356613 +0000 UTC m=+3.952485080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.963404 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448593014bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.824989375 +0000 UTC m=+3.954117832,LastTimestamp:2026-03-18 14:00:19.824989375 +0000 UTC m=+3.954117832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.967955 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df44863346ba8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:19.993045928 +0000 UTC m=+4.122174385,LastTimestamp:2026-03-18 14:00:19.993045928 +0000 UTC m=+4.122174385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.971984 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df44867ca16c1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.069963457 +0000 UTC m=+4.199091914,LastTimestamp:2026-03-18 14:00:20.069963457 +0000 UTC m=+4.199091914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.976435 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df44867eb07be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.072122302 +0000 UTC m=+4.201250760,LastTimestamp:2026-03-18 14:00:20.072122302 +0000 UTC m=+4.201250760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.981114 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df44869b5a6d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.102178512 +0000 UTC m=+4.231306969,LastTimestamp:2026-03-18 14:00:20.102178512 +0000 UTC m=+4.231306969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.984612 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4486c9296e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.150212322 +0000 UTC m=+4.279340779,LastTimestamp:2026-03-18 14:00:20.150212322 +0000 UTC m=+4.279340779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.988636 4857 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4486cac4d79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.151897465 +0000 UTC m=+4.281025922,LastTimestamp:2026-03-18 14:00:20.151897465 +0000 UTC m=+4.281025922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.993381 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df44877a274be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.335801534 +0000 UTC 
m=+4.464929991,LastTimestamp:2026-03-18 14:00:20.335801534 +0000 UTC m=+4.464929991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:56 crc kubenswrapper[4857]: E0318 14:00:56.998321 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4487d8ebe40 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.435172928 +0000 UTC m=+4.564301385,LastTimestamp:2026-03-18 14:00:20.435172928 +0000 UTC m=+4.564301385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.002053 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df44881952df7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created 
container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.502703607 +0000 UTC m=+4.631832064,LastTimestamp:2026-03-18 14:00:20.502703607 +0000 UTC m=+4.631832064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.006082 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448882321ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.612669933 +0000 UTC m=+4.741798390,LastTimestamp:2026-03-18 14:00:20.612669933 +0000 UTC m=+4.741798390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.009839 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4488835de11 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.613897745 +0000 UTC m=+4.743026202,LastTimestamp:2026-03-18 14:00:20.613897745 +0000 UTC m=+4.743026202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.014546 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189df4488932ca67 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.630473319 +0000 UTC m=+4.759601776,LastTimestamp:2026-03-18 14:00:20.630473319 +0000 UTC m=+4.759601776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.018036 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4489542a851 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:20.832839761 +0000 UTC m=+4.961968218,LastTimestamp:2026-03-18 14:00:20.832839761 +0000 UTC m=+4.961968218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.021902 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4489f926dc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.00583981 +0000 UTC m=+5.134968267,LastTimestamp:2026-03-18 14:00:21.00583981 +0000 UTC m=+5.134968267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.025625 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448a0ddcba0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.027556256 +0000 UTC m=+5.156684753,LastTimestamp:2026-03-18 14:00:21.027556256 +0000 UTC m=+5.156684753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.029014 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448a1abee32 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.041065522 +0000 UTC m=+5.170193969,LastTimestamp:2026-03-18 14:00:21.041065522 +0000 UTC m=+5.170193969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.032656 4857 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448a1c7185c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.042845788 +0000 UTC m=+5.171974285,LastTimestamp:2026-03-18 14:00:21.042845788 +0000 UTC m=+5.171974285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.036499 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448b3329c12 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.33510453 +0000 UTC m=+5.464232987,LastTimestamp:2026-03-18 14:00:21.33510453 +0000 UTC 
m=+5.464232987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.041940 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448c6af76f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.662054132 +0000 UTC m=+5.791182589,LastTimestamp:2026-03-18 14:00:21.662054132 +0000 UTC m=+5.791182589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.046064 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448c79d8c71 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 
14:00:21.677657201 +0000 UTC m=+5.806785668,LastTimestamp:2026-03-18 14:00:21.677657201 +0000 UTC m=+5.806785668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.049563 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448d47f378b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.893773195 +0000 UTC m=+6.022901652,LastTimestamp:2026-03-18 14:00:21.893773195 +0000 UTC m=+6.022901652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.053985 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448d55b4ef0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.908197104 +0000 UTC m=+6.037325561,LastTimestamp:2026-03-18 14:00:21.908197104 +0000 UTC 
m=+6.037325561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.057982 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448d56c83db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.909324763 +0000 UTC m=+6.038453230,LastTimestamp:2026-03-18 14:00:21.909324763 +0000 UTC m=+6.038453230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.062359 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448f94b7019 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:22.511136793 +0000 UTC 
m=+6.640265280,LastTimestamp:2026-03-18 14:00:22.511136793 +0000 UTC m=+6.640265280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.066423 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448fb5b1a30 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:22.545717808 +0000 UTC m=+6.674846275,LastTimestamp:2026-03-18 14:00:22.545717808 +0000 UTC m=+6.674846275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.073212 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df448fb741c26 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:22.54735671 +0000 UTC m=+6.676485167,LastTimestamp:2026-03-18 14:00:22.54735671 +0000 UTC m=+6.676485167,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.076486 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df44912171496 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:22.927135894 +0000 UTC m=+7.056264371,LastTimestamp:2026-03-18 14:00:22.927135894 +0000 UTC m=+7.056264371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.079434 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df44913cb4dc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 
14:00:22.955724226 +0000 UTC m=+7.084852693,LastTimestamp:2026-03-18 14:00:22.955724226 +0000 UTC m=+7.084852693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.082833 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df44913e5d255 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:22.957462101 +0000 UTC m=+7.086590568,LastTimestamp:2026-03-18 14:00:22.957462101 +0000 UTC m=+7.086590568,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.086747 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df449231af53c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container 
etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.212602684 +0000 UTC m=+7.341731141,LastTimestamp:2026-03-18 14:00:23.212602684 +0000 UTC m=+7.341731141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.089897 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4492428bb46 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.230282566 +0000 UTC m=+7.359411023,LastTimestamp:2026-03-18 14:00:23.230282566 +0000 UTC m=+7.359411023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.093092 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df449243b428f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.231496847 +0000 UTC m=+7.360625304,LastTimestamp:2026-03-18 14:00:23.231496847 +0000 UTC m=+7.360625304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.097680 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189df448a1c7185c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448a1c7185c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.042845788 +0000 UTC m=+5.171974285,LastTimestamp:2026-03-18 14:00:23.378419029 +0000 UTC m=+7.507547476,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.103233 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.104051 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4493463d7a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.502591909 +0000 UTC m=+7.631720366,LastTimestamp:2026-03-18 14:00:23.502591909 +0000 UTC m=+7.631720366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.123797 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189df448c6af76f4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448c6af76f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.662054132 +0000 UTC m=+5.791182589,LastTimestamp:2026-03-18 14:00:23.519547083 +0000 UTC m=+7.648675530,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.128198 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189df4493576bda2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.52060765 +0000 UTC m=+7.649736107,LastTimestamp:2026-03-18 14:00:23.52060765 +0000 UTC m=+7.649736107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.133070 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189df448c79d8c71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df448c79d8c71 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:21.677657201 +0000 UTC m=+5.806785668,LastTimestamp:2026-03-18 14:00:23.531235342 +0000 UTC 
m=+7.660363799,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.137309 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-apiserver-crc.189df4493c1839b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": dial tcp 192.168.126.11:6443: connect: connection refused Mar 18 14:00:57 crc kubenswrapper[4857]: body: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.631854009 +0000 UTC m=+7.760982486,LastTimestamp:2026-03-18 14:00:23.631854009 +0000 UTC m=+7.760982486,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.141400 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df4493c192604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:23.6319145 +0000 UTC m=+7.761042957,LastTimestamp:2026-03-18 14:00:23.6319145 +0000 UTC m=+7.761042957,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.147842 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-controller-manager-crc.189df44973d77c54 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 18 14:00:57 crc kubenswrapper[4857]: body: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:24.567135316 +0000 UTC m=+8.696263773,LastTimestamp:2026-03-18 14:00:24.567135316 +0000 UTC m=+8.696263773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.152027 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df44973d87408 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:24.567198728 +0000 UTC m=+8.696327185,LastTimestamp:2026-03-18 14:00:24.567198728 +0000 UTC m=+8.696327185,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.157997 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e2e596 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 18 14:00:57 crc kubenswrapper[4857]: body: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567103894 +0000 UTC m=+18.696232381,LastTimestamp:2026-03-18 14:00:34.567103894 +0000 UTC m=+18.696232381,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.161425 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e45a9b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567199387 +0000 UTC m=+18.696327884,LastTimestamp:2026-03-18 14:00:34.567199387 +0000 UTC 
m=+18.696327884,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.166089 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189df44bff0841f6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:35.492299254 +0000 UTC m=+19.621427721,LastTimestamp:2026-03-18 14:00:35.492299254 +0000 UTC m=+19.621427721,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.169823 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-apiserver-crc.189df44c91bebd78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 18 14:00:57 crc kubenswrapper[4857]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 18 14:00:57 crc kubenswrapper[4857]: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:37.95373196 +0000 UTC m=+22.082860417,LastTimestamp:2026-03-18 14:00:37.95373196 +0000 UTC m=+22.082860417,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.177403 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df44bc7e2e596\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e2e596 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 18 14:00:57 crc 
kubenswrapper[4857]: body: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567103894 +0000 UTC m=+18.696232381,LastTimestamp:2026-03-18 14:00:44.567439331 +0000 UTC m=+28.696567798,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.182255 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df44bc7e45a9b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e45a9b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567199387 +0000 UTC m=+18.696327884,LastTimestamp:2026-03-18 14:00:44.567572304 +0000 UTC m=+28.696700771,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.187175 4857 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189df44e1c1a3159 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:44.569948505 +0000 UTC m=+28.699076982,LastTimestamp:2026-03-18 14:00:44.569948505 +0000 UTC m=+28.699076982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.192260 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df447fa5d98e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df447fa5d98e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.234136805 +0000 UTC m=+2.363265262,LastTimestamp:2026-03-18 14:00:44.689123437 +0000 UTC m=+28.818251914,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.196662 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df4480ed1e0de\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4480ed1e0de openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.577301726 +0000 UTC m=+2.706430173,LastTimestamp:2026-03-18 14:00:44.866989561 +0000 UTC m=+28.996118018,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.200510 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df4480f8bc5c4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df4480f8bc5c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:18.589484484 +0000 UTC m=+2.718612941,LastTimestamp:2026-03-18 14:00:44.877223083 +0000 UTC m=+29.006351560,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.205085 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df44bc7e2e596\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 18 14:00:57 crc kubenswrapper[4857]: &Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e2e596 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 18 14:00:57 crc kubenswrapper[4857]: body: Mar 18 14:00:57 crc kubenswrapper[4857]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567103894 +0000 UTC m=+18.696232381,LastTimestamp:2026-03-18 14:00:54.566550705 +0000 UTC m=+38.695679192,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 18 14:00:57 crc kubenswrapper[4857]: > Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.208832 4857 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189df44bc7e45a9b\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189df44bc7e45a9b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:00:34.567199387 +0000 UTC m=+18.696327884,LastTimestamp:2026-03-18 14:00:54.566624957 +0000 UTC m=+38.695753454,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.495182 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.716426 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.717370 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.720601 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" exitCode=255 Mar 18 
14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.720675 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8"} Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.720787 4857 scope.go:117] "RemoveContainer" containerID="5bfc7bea4b1fed7adebf19c3f03fd52a3bb7b91362298883a0d41025adcc5d80" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.721002 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.722576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.722641 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.722664 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:57 crc kubenswrapper[4857]: I0318 14:00:57.723603 4857 scope.go:117] "RemoveContainer" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:00:57 crc kubenswrapper[4857]: E0318 14:00:57.723966 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.108660 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.726593 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.952595 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.954557 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.954638 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.954664 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:00:58 crc kubenswrapper[4857]: I0318 14:00:58.954816 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:00:58 crc kubenswrapper[4857]: E0318 14:00:58.963425 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 18 14:00:58 crc kubenswrapper[4857]: E0318 14:00:58.963627 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 14:00:59 crc kubenswrapper[4857]: I0318 14:00:59.106013 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:00:59 crc kubenswrapper[4857]: W0318 14:00:59.296285 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 18 14:00:59 crc kubenswrapper[4857]: E0318 14:00:59.296365 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 14:01:00 crc kubenswrapper[4857]: I0318 14:01:00.110017 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.002042 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.002321 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.003848 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.003896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.003906 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:01 crc 
kubenswrapper[4857]: I0318 14:01:01.004508 4857 scope.go:117] "RemoveContainer" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:01:01 crc kubenswrapper[4857]: E0318 14:01:01.004677 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:01:01 crc kubenswrapper[4857]: I0318 14:01:01.111567 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.031658 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.031919 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.032995 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.033024 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.033033 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.033608 4857 scope.go:117] "RemoveContainer" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:01:02 crc 
kubenswrapper[4857]: E0318 14:01:02.033808 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:01:02 crc kubenswrapper[4857]: I0318 14:01:02.108702 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:03 crc kubenswrapper[4857]: I0318 14:01:03.108955 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:03 crc kubenswrapper[4857]: W0318 14:01:03.387467 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 18 14:01:03 crc kubenswrapper[4857]: E0318 14:01:03.387548 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.110376 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:04 
crc kubenswrapper[4857]: I0318 14:01:04.545808 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.546006 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.547337 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.547455 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.547520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.551209 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.747290 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.748533 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.748573 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:04 crc kubenswrapper[4857]: I0318 14:01:04.748585 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.286806 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.964491 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.965964 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.966009 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.966024 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:05 crc kubenswrapper[4857]: I0318 14:01:05.966059 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:01:05 crc kubenswrapper[4857]: E0318 14:01:05.969632 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 18 14:01:05 crc kubenswrapper[4857]: E0318 14:01:05.969798 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 14:01:06 crc kubenswrapper[4857]: I0318 14:01:06.381924 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:06 crc kubenswrapper[4857]: W0318 14:01:06.868734 4857 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User 
"system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:06 crc kubenswrapper[4857]: E0318 14:01:06.868813 4857 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 14:01:07 crc kubenswrapper[4857]: I0318 14:01:07.276694 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:07 crc kubenswrapper[4857]: E0318 14:01:07.495814 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:01:08 crc kubenswrapper[4857]: I0318 14:01:08.110478 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:09 crc kubenswrapper[4857]: I0318 14:01:09.106501 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:09 crc kubenswrapper[4857]: I0318 14:01:09.510669 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 14:01:09 crc kubenswrapper[4857]: I0318 14:01:09.510934 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:09 crc 
kubenswrapper[4857]: I0318 14:01:09.512155 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:09 crc kubenswrapper[4857]: I0318 14:01:09.512201 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:09 crc kubenswrapper[4857]: I0318 14:01:09.512215 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:10 crc kubenswrapper[4857]: I0318 14:01:10.107784 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:11 crc kubenswrapper[4857]: I0318 14:01:11.106290 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.106381 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.970233 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.971804 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.971947 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.972048 4857 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:12 crc kubenswrapper[4857]: I0318 14:01:12.972137 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:01:12 crc kubenswrapper[4857]: E0318 14:01:12.975839 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 18 14:01:12 crc kubenswrapper[4857]: E0318 14:01:12.975903 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 14:01:13 crc kubenswrapper[4857]: I0318 14:01:13.108222 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:14 crc kubenswrapper[4857]: I0318 14:01:14.106956 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:15 crc kubenswrapper[4857]: I0318 14:01:15.107372 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.108284 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.162697 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.163885 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.163920 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.163930 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:16 crc kubenswrapper[4857]: I0318 14:01:16.164510 4857 scope.go:117] "RemoveContainer" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:01:16 crc kubenswrapper[4857]: E0318 14:01:16.164717 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 18 14:01:17 crc kubenswrapper[4857]: I0318 14:01:17.107878 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:17 crc kubenswrapper[4857]: E0318 14:01:17.496707 4857 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 18 14:01:18 crc kubenswrapper[4857]: I0318 14:01:18.108983 4857 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.106217 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.976065 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.977538 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.977692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.977810 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:19 crc kubenswrapper[4857]: I0318 14:01:19.977931 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:01:19 crc kubenswrapper[4857]: E0318 14:01:19.981388 4857 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 18 14:01:19 crc kubenswrapper[4857]: E0318 14:01:19.981466 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 14:01:20 crc kubenswrapper[4857]: I0318 14:01:20.108480 4857 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:21 crc kubenswrapper[4857]: I0318 14:01:21.108683 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:22 crc kubenswrapper[4857]: I0318 14:01:22.107561 4857 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 14:01:22 crc kubenswrapper[4857]: I0318 14:01:22.741203 4857 csr.go:261] certificate signing request csr-79868 is approved, waiting to be issued Mar 18 14:01:22 crc kubenswrapper[4857]: I0318 14:01:22.750146 4857 csr.go:257] certificate signing request csr-79868 is issued Mar 18 14:01:22 crc kubenswrapper[4857]: I0318 14:01:22.814087 4857 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 14:01:22 crc kubenswrapper[4857]: I0318 14:01:22.936492 4857 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 18 14:01:23 crc kubenswrapper[4857]: I0318 14:01:23.751915 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 14:59:15.014095655 +0000 UTC Mar 18 14:01:23 crc kubenswrapper[4857]: I0318 14:01:23.751995 4857 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6048h57m51.262107002s for next certificate rotation Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.411712 4857 reflector.go:368] Caches 
populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.678173 4857 apiserver.go:52] "Watching apiserver" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.686004 4857 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.686510 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-image-registry/node-ca-rp52k","openshift-machine-config-operator/machine-config-daemon-sjqg6","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-bpx9l","openshift-dns/node-resolver-dw9w7","openshift-multus/multus-additional-cni-plugins-mr7s9","openshift-multus/multus-bdlm5"] Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.687005 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.687047 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.687166 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.687568 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.687674 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.687707 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.687783 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.688184 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.688260 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.688553 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.689068 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.689677 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.689691 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.690317 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.690485 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694034 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694053 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694092 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694033 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694279 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694559 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694598 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694831 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694929 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.694950 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695008 
4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695271 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695383 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695405 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695387 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.695601 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696265 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696283 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696811 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696954 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696986 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.697072 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697181 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697188 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.696987 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697356 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697427 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697493 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697700 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.697970 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.698007 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.698163 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.698409 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.702354 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.708276 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.720644 4857 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.727325 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740458 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740502 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740533 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740558 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740582 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740605 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740662 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740688 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" 
(UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740707 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740729 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740779 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740806 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740829 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740896 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740924 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740948 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.740972 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741001 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741027 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741051 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741085 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741107 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741130 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741151 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.741172 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741201 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741225 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741249 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741275 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741297 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741326 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741365 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741398 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741424 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741451 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 18 14:01:26 crc 
kubenswrapper[4857]: I0318 14:01:26.741475 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741498 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741523 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741626 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741653 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741678 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741699 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741853 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741893 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741919 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.741951 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.741976 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742006 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742031 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742021 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742057 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742094 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742121 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742146 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742172 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742196 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742223 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742254 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742278 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742307 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742332 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742356 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742379 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742405 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742428 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742453 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742474 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742503 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742527 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742558 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742091 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742079 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742324 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742519 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742995 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743015 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743141 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743374 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743416 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743543 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.743643 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.744023 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.744318 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.744333 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.744565 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.745335 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.745386 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.745708 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.745823 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746057 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746350 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746484 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746696 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746883 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.746903 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.747017 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.747402 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.747999 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748124 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748486 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.748531 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:27.248504624 +0000 UTC m=+71.377633081 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748833 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748894 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748878 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.742587 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749187 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749236 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749264 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749232 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749298 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749333 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749390 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749440 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749466 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749526 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749555 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749559 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749579 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749621 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749625 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749662 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749683 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749727 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749726 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.748092 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749924 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749959 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.749988 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750011 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750021 4857 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750075 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750139 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750172 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750204 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750226 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750244 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750260 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750278 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750295 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750313 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750332 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750349 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 18 14:01:26 
crc kubenswrapper[4857]: I0318 14:01:26.750365 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750383 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750401 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750417 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750435 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750451 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750468 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750484 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750501 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750520 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750538 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.750554 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750572 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750586 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750602 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750618 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750632 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750648 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750663 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750678 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750695 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750712 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.750726 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750743 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750791 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750814 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750832 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750848 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750865 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750886 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750903 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750920 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750936 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.750952 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750967 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750983 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.750999 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751015 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751030 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751044 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751060 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751079 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751095 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751111 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751127 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751146 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751161 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751176 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751194 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751211 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751228 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751244 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751260 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751276 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751291 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751307 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751322 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751352 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751368 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751383 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751399 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751416 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751431 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751448 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751465 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751482 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751498 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751515 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751531 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751547 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751562 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751578 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751594 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751613 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751630 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751646 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751663 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751679 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751696 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751713 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751731 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751774 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751799 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751839 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751859 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751878 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751898 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751920 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751940 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751961 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751979 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.751996 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752012 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752044 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752061 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752118 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752138 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752155 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8nhj\" (UniqueName: \"kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752174 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k54kd\" (UniqueName: \"kubernetes.io/projected/0ca53fe8-513c-4226-8659-208b304ffb78-kube-api-access-k54kd\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752192 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-system-cni-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752232 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752257 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752273 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aeb3da01-2d25-4561-9674-063dd5bb41a4-serviceca\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752288 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752306 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752321 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-system-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752339 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752357 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752380 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752397 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-multus\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752414 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-multus-certs\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752429 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752443 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-cni-binary-copy\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752461 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-netns\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752476 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-socket-dir-parent\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752518 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752535 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752553 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752570 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-bin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752592 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752607 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752622 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752638 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-os-release\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752653 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeb3da01-2d25-4561-9674-063dd5bb41a4-host\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752675 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b115eb6c-2a12-4d60-b269-911a639d8eb1-proxy-tls\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752690 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752706 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752722 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752738 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752779 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqh4\" (UniqueName: \"kubernetes.io/projected/d4bb5036-d0de-4152-af7f-1ef602441c3c-kube-api-access-7rqh4\") pod \"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752802 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752819 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752833 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-hostroot\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752847 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d4bb5036-d0de-4152-af7f-1ef602441c3c-hosts-file\") pod \"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752868 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752884 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-cnibin\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752899 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752919 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-multus-daemon-config\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752958 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.752984 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfkph\" (UniqueName: \"kubernetes.io/projected/aeb3da01-2d25-4561-9674-063dd5bb41a4-kube-api-access-rfkph\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753006 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-etc-kubernetes\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753032 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753055 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753079 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753140 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-k8s-cni-cncf-io\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753169 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753192 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b115eb6c-2a12-4d60-b269-911a639d8eb1-rootfs\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753215 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753237 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-os-release\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753265 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9sh\" (UniqueName: \"kubernetes.io/projected/d9391c2e-3dc6-4162-8148-71972b9c14d3-kube-api-access-kb9sh\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753310 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753329 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753347 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-cnibin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753364 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-kubelet\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753383 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753401 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b115eb6c-2a12-4d60-b269-911a639d8eb1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753417 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mxw\" (UniqueName: \"kubernetes.io/projected/b115eb6c-2a12-4d60-b269-911a639d8eb1-kube-api-access-x6mxw\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753434 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753450 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-conf-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753476 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753539 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753551 4857 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753562 4857 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753573 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753583 4857 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753593 4857 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753603 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753615 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753625 4857 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753635 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753645 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753655 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753665 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753674 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753684 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753694 4857 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753704 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753715 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753727 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753737 4857 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753779 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753790 4857 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753801 4857 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753811 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753822 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753832 4857 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753842 4857 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753852 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753863 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753873 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753883 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753894 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753904 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753941 4857 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName:
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753954 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753966 4857 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753978 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.753989 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.754000 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.754011 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.754022 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.754216 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.754230 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.755318 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.755405 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:27.255380618 +0000 UTC m=+71.384509085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.756788 4857 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.757555 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.758929 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.759836 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.760284 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.772952 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.773624 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.773853 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.773891 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.774023 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.774218 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.774021 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.774363 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.774388 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775037 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775091 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775135 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775284 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775311 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775580 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775700 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775845 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775896 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.775924 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.776235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.776741 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.776887 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.777127 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.777150 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778096 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778219 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778291 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778451 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778464 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.778887 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.779229 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.779480 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.779913 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780138 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780780 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780151 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780170 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.780224 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.784847 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.784882 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.784959 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:27.284934858 +0000 UTC m=+71.414063315 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.785625 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.786059 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780303 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.780489 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.781469 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.782274 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.786152 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.786164 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.786196 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:27.28618818 +0000 UTC m=+71.415316637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.782362 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: E0318 14:01:26.786234 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:27.286228141 +0000 UTC m=+71.415356598 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787219 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787948 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787966 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787689 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787433 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787601 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787610 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787635 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.788081 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.787101 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.788235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.788275 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.788362 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.788565 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.789052 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.789447 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.789615 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.789976 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.790580 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.790735 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.790791 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.790994 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.791005 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.791130 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.791157 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.791247 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.791726 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.792165 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.792305 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.792604 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793037 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793068 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793173 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793224 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793333 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793379 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793489 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.793538 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794145 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794272 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794835 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794575 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794345 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794990 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.794969 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795153 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795283 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795421 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795538 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795742 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.795870 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.796119 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.796127 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.796493 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.796597 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.797354 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.797477 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.797502 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.796507 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.799183 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.799257 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.799262 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.799437 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800186 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800158 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800280 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800364 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800607 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800820 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.800995 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.801999 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.802192 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.803665 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.804138 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.806344 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.807267 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.810728 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.810905 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.811062 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.811236 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.811238 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.813021 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.813072 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.813325 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.813359 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.813839 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.814052 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.814482 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.815126 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.815245 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.818699 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.822719 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.823420 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.823489 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.823917 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824075 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824365 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825076 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824402 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824735 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825116 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824861 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825363 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824860 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.824978 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825491 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825507 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825024 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825044 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.825337 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.827190 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.827322 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.827960 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.828855 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.828900 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.831175 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.839192 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857355 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857409 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857442 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-system-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857466 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aeb3da01-2d25-4561-9674-063dd5bb41a4-serviceca\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857493 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857507 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-multus\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857531 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857546 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-cni-binary-copy\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " 
pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857560 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-netns\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857575 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-multus-certs\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857591 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857608 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-socket-dir-parent\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857636 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-bin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857653 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-os-release\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857666 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeb3da01-2d25-4561-9674-063dd5bb41a4-host\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857681 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b115eb6c-2a12-4d60-b269-911a639d8eb1-proxy-tls\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857696 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857711 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857725 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857739 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857768 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857787 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857800 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857815 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqh4\" (UniqueName: \"kubernetes.io/projected/d4bb5036-d0de-4152-af7f-1ef602441c3c-kube-api-access-7rqh4\") pod 
\"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857838 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-hostroot\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857854 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d4bb5036-d0de-4152-af7f-1ef602441c3c-hosts-file\") pod \"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857877 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857892 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-multus-daemon-config\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857914 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfkph\" (UniqueName: \"kubernetes.io/projected/aeb3da01-2d25-4561-9674-063dd5bb41a4-kube-api-access-rfkph\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 
crc kubenswrapper[4857]: I0318 14:01:26.857928 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-cnibin\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857943 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857958 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857974 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.857989 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-k8s-cni-cncf-io\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 
14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858003 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-etc-kubernetes\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858026 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b115eb6c-2a12-4d60-b269-911a639d8eb1-rootfs\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858041 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858056 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-os-release\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858071 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: 
I0318 14:01:26.858097 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9sh\" (UniqueName: \"kubernetes.io/projected/d9391c2e-3dc6-4162-8148-71972b9c14d3-kube-api-access-kb9sh\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858111 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-cnibin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858125 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-kubelet\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858154 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b115eb6c-2a12-4d60-b269-911a639d8eb1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858169 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mxw\" (UniqueName: \"kubernetes.io/projected/b115eb6c-2a12-4d60-b269-911a639d8eb1-kube-api-access-x6mxw\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: 
I0318 14:01:26.858186 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858215 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858233 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858258 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858282 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-conf-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858296 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g8nhj\" (UniqueName: \"kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858315 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k54kd\" (UniqueName: \"kubernetes.io/projected/0ca53fe8-513c-4226-8659-208b304ffb78-kube-api-access-k54kd\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858329 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-system-cni-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858347 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858363 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858380 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858426 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858436 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858447 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858455 4857 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858464 4857 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858473 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858482 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" 
(UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858491 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858500 4857 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858508 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858517 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858526 4857 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858547 4857 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858557 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858565 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858584 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858593 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858603 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858611 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858620 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858629 4857 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc 
kubenswrapper[4857]: I0318 14:01:26.858637 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858646 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858665 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858674 4857 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858683 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858701 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858710 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858718 
4857 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858727 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858735 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858744 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858769 4857 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858778 4857 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858787 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858796 4857 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858804 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858813 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858822 4857 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858831 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858847 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858856 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858865 4857 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 18 
14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858873 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858882 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858891 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858900 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858910 4857 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858919 4857 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858928 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.858948 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858957 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858966 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858974 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858983 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.858991 4857 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859000 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859009 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859017 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859026 4857 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859036 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859044 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859052 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859060 4857 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859069 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859078 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859087 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859095 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859103 4857 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859121 4857 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859131 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859139 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859148 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859156 4857 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859164 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859173 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859181 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859189 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859198 4857 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" 
Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859206 4857 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859215 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859223 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859232 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859243 4857 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859251 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859261 4857 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859269 4857 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859277 4857 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859286 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859296 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859309 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859321 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859331 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859354 4857 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859370 4857 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859380 4857 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859391 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859402 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859412 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859421 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859429 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath 
\"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859437 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859445 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859454 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859462 4857 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859470 4857 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859479 4857 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859488 4857 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859496 4857 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859505 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859513 4857 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859521 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859529 4857 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859537 4857 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859547 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859555 4857 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859569 4857 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859578 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859586 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859595 4857 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859603 4857 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859611 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859621 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859629 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859638 4857 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859647 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859655 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859663 4857 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859671 4857 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859680 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.859689 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859697 4857 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859705 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859718 4857 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859726 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859735 4857 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859745 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859769 4857 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859777 4857 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859786 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859794 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859802 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859817 4857 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859825 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859833 4857 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath 
\"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.859842 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.873672 4857 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.873698 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.873717 4857 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.873733 4857 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.864028 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-kubelet\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.864022 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-os-release\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.864152 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-system-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.864889 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b115eb6c-2a12-4d60-b269-911a639d8eb1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865200 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865221 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865240 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash\") 
pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865242 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-multus-daemon-config\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865255 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865274 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-conf-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865288 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-netns\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865317 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 
14:01:26.865342 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-multus\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865508 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/aeb3da01-2d25-4561-9674-063dd5bb41a4-serviceca\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865512 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-system-cni-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865530 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865537 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-multus-certs\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865547 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865558 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865630 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-cni-dir\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865653 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-multus-socket-dir-parent\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865894 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-var-lib-cni-bin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865917 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865986 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aeb3da01-2d25-4561-9674-063dd5bb41a4-host\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.865996 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866024 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866038 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866243 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0ca53fe8-513c-4226-8659-208b304ffb78-cni-binary-copy\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866450 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866473 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-host-run-k8s-cni-cncf-io\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866548 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866569 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866599 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b115eb6c-2a12-4d60-b269-911a639d8eb1-rootfs\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866620 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-etc-kubernetes\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866944 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-os-release\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866953 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-cnibin\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.866992 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.867063 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d4bb5036-d0de-4152-af7f-1ef602441c3c-hosts-file\") pod \"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.867095 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-hostroot\") pod \"multus-bdlm5\" (UID: 
\"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.867418 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.867442 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.867457 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.873084 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.862376 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.862946 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d9391c2e-3dc6-4162-8148-71972b9c14d3-cni-binary-copy\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.863882 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.864007 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0ca53fe8-513c-4226-8659-208b304ffb78-cnibin\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.883131 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.885955 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d9391c2e-3dc6-4162-8148-71972b9c14d3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.887122 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.887279 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b115eb6c-2a12-4d60-b269-911a639d8eb1-proxy-tls\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.890708 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8nhj\" (UniqueName: \"kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj\") pod \"ovnkube-node-bpx9l\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") " pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.892836 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9sh\" (UniqueName: \"kubernetes.io/projected/d9391c2e-3dc6-4162-8148-71972b9c14d3-kube-api-access-kb9sh\") pod \"multus-additional-cni-plugins-mr7s9\" (UID: \"d9391c2e-3dc6-4162-8148-71972b9c14d3\") " pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.896335 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k54kd\" (UniqueName: \"kubernetes.io/projected/0ca53fe8-513c-4226-8659-208b304ffb78-kube-api-access-k54kd\") pod \"multus-bdlm5\" (UID: \"0ca53fe8-513c-4226-8659-208b304ffb78\") " pod="openshift-multus/multus-bdlm5" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.896453 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mxw\" (UniqueName: \"kubernetes.io/projected/b115eb6c-2a12-4d60-b269-911a639d8eb1-kube-api-access-x6mxw\") pod \"machine-config-daemon-sjqg6\" (UID: \"b115eb6c-2a12-4d60-b269-911a639d8eb1\") " pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.898195 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqh4\" (UniqueName: \"kubernetes.io/projected/d4bb5036-d0de-4152-af7f-1ef602441c3c-kube-api-access-7rqh4\") pod \"node-resolver-dw9w7\" (UID: \"d4bb5036-d0de-4152-af7f-1ef602441c3c\") " pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.899883 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.903549 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfkph\" (UniqueName: \"kubernetes.io/projected/aeb3da01-2d25-4561-9674-063dd5bb41a4-kube-api-access-rfkph\") pod \"node-ca-rp52k\" (UID: \"aeb3da01-2d25-4561-9674-063dd5bb41a4\") " pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:26 crc 
kubenswrapper[4857]: I0318 14:01:26.917229 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.927610 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.935837 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.945961 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.974966 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.982280 4857 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.983624 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.983669 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:26 crc kubenswrapper[4857]: I0318 14:01:26.983680 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:26 crc 
kubenswrapper[4857]: I0318 14:01:26.983792 4857 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.024114 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.024181 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.028549 4857 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.028868 4857 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030396 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030436 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030464 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030492 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.030510 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.037475 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:01:27 crc kubenswrapper[4857]: W0318 14:01:27.045957 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f722ae51291b99666677061f3f73316eca1e065dcfe667aa9717500fa139495c WatchSource:0}: Error finding container f722ae51291b99666677061f3f73316eca1e065dcfe667aa9717500fa139495c: Status 404 returned error can't find the container with id f722ae51291b99666677061f3f73316eca1e065dcfe667aa9717500fa139495c Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.047290 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.047788 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-dw9w7" Mar 18 14:01:27 crc kubenswrapper[4857]: W0318 14:01:27.048271 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fe944153108c2dcab27308d2463ff240c3486bfc753c2d183e9d4000ea1d62b5 WatchSource:0}: Error finding container fe944153108c2dcab27308d2463ff240c3486bfc753c2d183e9d4000ea1d62b5: Status 404 returned error can't find the container with id fe944153108c2dcab27308d2463ff240c3486bfc753c2d183e9d4000ea1d62b5 Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.048507 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.049987 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: source /etc/kubernetes/apiserver-url.env Mar 18 14:01:27 crc kubenswrapper[4857]: else Mar 18 14:01:27 crc kubenswrapper[4857]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 18 14:01:27 crc kubenswrapper[4857]: exit 1 Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 18 14:01:27 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.050506 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:27 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:27 crc 
kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 18 14:01:27 crc kubenswrapper[4857]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 18 14:01:27 crc kubenswrapper[4857]: ho_enable="--enable-hybrid-overlay" Mar 18 14:01:27 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 18 14:01:27 crc kubenswrapper[4857]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 18 14:01:27 crc kubenswrapper[4857]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-host=127.0.0.1 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-port=9743 \ Mar 18 14:01:27 crc kubenswrapper[4857]: ${ho_enable} \ Mar 18 14:01:27 crc kubenswrapper[4857]: --enable-interconnect \ Mar 18 14:01:27 crc kubenswrapper[4857]: --disable-approver \ Mar 18 14:01:27 crc kubenswrapper[4857]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --wait-for-kubernetes-api=200s \ Mar 18 14:01:27 crc kubenswrapper[4857]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --loglevel="${LOGLEVEL}" Mar 18 14:01:27 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc 
kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.051309 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.051638 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.056168 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:27 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --disable-webhook \ Mar 18 14:01:27 crc kubenswrapper[4857]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --loglevel="${LOGLEVEL}" Mar 18 14:01:27 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.056645 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6mxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.056949 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.056975 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.056983 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.057006 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.057045 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.057699 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.059872 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rp52k" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.060830 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6mxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.062026 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:01:27 crc kubenswrapper[4857]: W0318 14:01:27.065823 4857 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4bb5036_d0de_4152_af7f_1ef602441c3c.slice/crio-c010f4d29bd42eb8b88063ff892fc325efd79d39026ea07fd3a6ca4702119872 WatchSource:0}: Error finding container c010f4d29bd42eb8b88063ff892fc325efd79d39026ea07fd3a6ca4702119872: Status 404 returned error can't find the container with id c010f4d29bd42eb8b88063ff892fc325efd79d39026ea07fd3a6ca4702119872 Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.068172 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:27 crc kubenswrapper[4857]: set -uo pipefail Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 18 14:01:27 crc kubenswrapper[4857]: HOSTS_FILE="/etc/hosts" Mar 18 14:01:27 crc kubenswrapper[4857]: TEMP_FILE="/etc/hosts.tmp" Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Make a temporary file with the old hosts file's attributes. Mar 18 14:01:27 crc kubenswrapper[4857]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 18 14:01:27 crc kubenswrapper[4857]: echo "Failed to preserve hosts file. Exiting." 
Mar 18 14:01:27 crc kubenswrapper[4857]: exit 1 Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: while true; do Mar 18 14:01:27 crc kubenswrapper[4857]: declare -A svc_ips Mar 18 14:01:27 crc kubenswrapper[4857]: for svc in "${services[@]}"; do Mar 18 14:01:27 crc kubenswrapper[4857]: # Fetch service IP from cluster dns if present. We make several tries Mar 18 14:01:27 crc kubenswrapper[4857]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 18 14:01:27 crc kubenswrapper[4857]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 18 14:01:27 crc kubenswrapper[4857]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 18 14:01:27 crc kubenswrapper[4857]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 18 14:01:27 crc kubenswrapper[4857]: for i in ${!cmds[*]} Mar 18 14:01:27 crc kubenswrapper[4857]: do Mar 18 14:01:27 crc kubenswrapper[4857]: ips=($(eval "${cmds[i]}")) Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: svc_ips["${svc}"]="${ips[@]}" Mar 18 14:01:27 crc kubenswrapper[4857]: break Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Update /etc/hosts only if we get valid service IPs Mar 18 14:01:27 crc kubenswrapper[4857]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 18 14:01:27 crc kubenswrapper[4857]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 18 14:01:27 crc kubenswrapper[4857]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 18 14:01:27 crc kubenswrapper[4857]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: continue Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Append resolver entries for services Mar 18 14:01:27 crc kubenswrapper[4857]: rc=0 Mar 18 14:01:27 crc kubenswrapper[4857]: for svc in "${!svc_ips[@]}"; do Mar 18 14:01:27 crc kubenswrapper[4857]: for ip in ${svc_ips[${svc}]}; do Mar 18 14:01:27 crc kubenswrapper[4857]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ $rc -ne 0 ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: continue Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 18 14:01:27 crc kubenswrapper[4857]: # Replace /etc/hosts with our modified version if needed Mar 18 14:01:27 crc kubenswrapper[4857]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 18 14:01:27 crc kubenswrapper[4857]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: unset svc_ips Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-dw9w7_openshift-dns(d4bb5036-d0de-4152-af7f-1ef602441c3c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.069409 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-dw9w7" podUID="d4bb5036-d0de-4152-af7f-1ef602441c3c" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.070825 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.076460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.076668 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.076873 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.077152 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 18 14:01:27 crc kubenswrapper[4857]: while [ true ]; Mar 18 14:01:27 crc kubenswrapper[4857]: do Mar 18 14:01:27 crc kubenswrapper[4857]: for f in $(ls /tmp/serviceca); do Mar 18 14:01:27 crc kubenswrapper[4857]: echo $f Mar 18 14:01:27 crc kubenswrapper[4857]: ca_file_path="/tmp/serviceca/${f}" Mar 18 14:01:27 crc kubenswrapper[4857]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 18 14:01:27 crc kubenswrapper[4857]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 18 14:01:27 crc kubenswrapper[4857]: if [ -e "${reg_dir_path}" ]; then Mar 18 14:01:27 crc kubenswrapper[4857]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 18 14:01:27 crc kubenswrapper[4857]: else Mar 18 14:01:27 crc kubenswrapper[4857]: mkdir $reg_dir_path Mar 18 14:01:27 crc kubenswrapper[4857]: cp $ca_file_path $reg_dir_path/ca.crt Mar 18 14:01:27 crc 
kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: for d in $(ls /etc/docker/certs.d); do Mar 18 14:01:27 crc kubenswrapper[4857]: echo $d Mar 18 14:01:27 crc kubenswrapper[4857]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 18 14:01:27 crc kubenswrapper[4857]: reg_conf_path="/tmp/serviceca/${dp}" Mar 18 14:01:27 crc kubenswrapper[4857]: if [ ! -e "${reg_conf_path}" ]; then Mar 18 14:01:27 crc kubenswrapper[4857]: rm -rf /etc/docker/certs.d/$d Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait ${!} Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfkph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe
:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-rp52k_openshift-image-registry(aeb3da01-2d25-4561-9674-063dd5bb41a4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.077165 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.077218 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.078572 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-rp52k" podUID="aeb3da01-2d25-4561-9674-063dd5bb41a4" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.089637 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.098137 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.098179 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.098193 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.098210 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.098221 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.100182 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bdlm5" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.109586 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.113240 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.113344 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.113419 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.113486 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.113545 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.114966 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:27 crc kubenswrapper[4857]: W0318 14:01:27.117732 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca53fe8_513c_4226_8659_208b304ffb78.slice/crio-d343641b38ada362c9271273bfee3d446e800fa160721f4deee701c378a816dd WatchSource:0}: Error finding container d343641b38ada362c9271273bfee3d446e800fa160721f4deee701c378a816dd: Status 404 returned error can't find the container with id d343641b38ada362c9271273bfee3d446e800fa160721f4deee701c378a816dd Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.118018 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.120744 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 18 14:01:27 crc kubenswrapper[4857]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 18 14:01:27 crc kubenswrapper[4857]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k54kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bdlm5_openshift-multus(0ca53fe8-513c-4226-8659-208b304ffb78): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.122181 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bdlm5" podUID="0ca53fe8-513c-4226-8659-208b304ffb78" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.124689 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.125108 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.127606 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.127637 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.127649 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.127668 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.127681 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.128404 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 18 14:01:27 crc kubenswrapper[4857]: apiVersion: v1 Mar 18 14:01:27 crc kubenswrapper[4857]: clusters: Mar 18 14:01:27 crc kubenswrapper[4857]: - cluster: Mar 18 14:01:27 crc kubenswrapper[4857]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 18 14:01:27 crc kubenswrapper[4857]: server: https://api-int.crc.testing:6443 Mar 18 14:01:27 crc kubenswrapper[4857]: name: default-cluster Mar 18 14:01:27 crc kubenswrapper[4857]: contexts: Mar 18 14:01:27 crc kubenswrapper[4857]: - context: Mar 18 14:01:27 crc kubenswrapper[4857]: cluster: default-cluster Mar 18 14:01:27 crc kubenswrapper[4857]: namespace: default Mar 18 14:01:27 crc kubenswrapper[4857]: user: default-auth Mar 18 14:01:27 crc kubenswrapper[4857]: name: default-context Mar 18 14:01:27 crc kubenswrapper[4857]: current-context: default-context Mar 18 14:01:27 crc kubenswrapper[4857]: kind: Config Mar 18 14:01:27 crc kubenswrapper[4857]: preferences: {} Mar 18 14:01:27 crc kubenswrapper[4857]: users: Mar 18 14:01:27 crc kubenswrapper[4857]: - name: default-auth Mar 18 14:01:27 crc kubenswrapper[4857]: user: Mar 18 14:01:27 crc kubenswrapper[4857]: client-certificate: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 18 14:01:27 crc kubenswrapper[4857]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 18 14:01:27 crc kubenswrapper[4857]: EOF Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8nhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.129495 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.131958 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-mr7s9_openshift-multus(d9391c2e-3dc6-4162-8148-71972b9c14d3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.133187 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" podUID="d9391c2e-3dc6-4162-8148-71972b9c14d3" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.168363 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.169164 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.171111 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.172355 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.172721 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.174588 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.175188 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.175823 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.176362 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.176767 4857 scope.go:117] "RemoveContainer" containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.176990 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.177507 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.178005 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.181200 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.182026 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.183007 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.184394 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.185095 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.186142 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.186564 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.186808 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.187366 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.188674 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.189196 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.189962 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.190880 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.191617 4857 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.192524 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.193486 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.194587 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.195139 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.196122 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.196677 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.197998 4857 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 
14:01:27.198138 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.199482 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.201152 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.201815 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.202818 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.205172 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.206051 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.207192 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.208106 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.209300 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.209940 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.210595 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.212149 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.212951 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.213966 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.214432 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.215380 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.216071 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.217333 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.218007 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.219167 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.219803 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.220378 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.221354 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.221859 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.222832 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.230531 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.230949 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.230971 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.230980 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.231001 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.231010 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.243228 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.254911 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.266719 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.277233 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.277354 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.277454 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.277607 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.277681 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:28.277667412 +0000 UTC m=+72.406795869 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.277764 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:28.277744914 +0000 UTC m=+72.406873371 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.288193 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.300885 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.317452 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.333049 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.333089 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.333102 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.333122 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.333137 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.378076 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.378138 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.378161 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378291 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378309 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378322 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378346 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378389 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:28.378372135 +0000 UTC m=+72.507500592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378361 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378498 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378518 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378472 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:28.378450067 +0000 UTC m=+72.507578564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.378563 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:28.37855375 +0000 UTC m=+72.507682207 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.435953 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.436007 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.436021 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.436040 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.436053 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.537657 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.537688 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.537696 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.537708 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.537717 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.598378 4857 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.639678 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.639705 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.639715 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.639728 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.639736 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.742779 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.742833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.742845 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.742886 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.742897 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.810101 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.811788 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.812157 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.812783 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"dea21f7698e9d823b0e83ac82221c5c18bfad30d1682b85f7638b7a988d09eeb"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.814060 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerStarted","Data":"d343641b38ada362c9271273bfee3d446e800fa160721f4deee701c378a816dd"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.814820 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.815560 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 18 14:01:27 crc kubenswrapper[4857]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 18 14:01:27 crc kubenswrapper[4857]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k54kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-bdlm5_openshift-multus(0ca53fe8-513c-4226-8659-208b304ffb78): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.815997 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.817107 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-bdlm5" podUID="0ca53fe8-513c-4226-8659-208b304ffb78" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.817337 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f722ae51291b99666677061f3f73316eca1e065dcfe667aa9717500fa139495c"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.818572 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: source /etc/kubernetes/apiserver-url.env Mar 18 14:01:27 crc kubenswrapper[4857]: else Mar 18 14:01:27 crc kubenswrapper[4857]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 18 14:01:27 crc kubenswrapper[4857]: exit 1 Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 18 14:01:27 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.818785 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe944153108c2dcab27308d2463ff240c3486bfc753c2d183e9d4000ea1d62b5"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.820117 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:27 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 18 14:01:27 crc kubenswrapper[4857]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 18 14:01:27 crc kubenswrapper[4857]: ho_enable="--enable-hybrid-overlay" Mar 18 14:01:27 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 18 14:01:27 crc kubenswrapper[4857]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 18 14:01:27 crc kubenswrapper[4857]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-host=127.0.0.1 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --webhook-port=9743 \ Mar 18 14:01:27 crc kubenswrapper[4857]: ${ho_enable} \ Mar 18 14:01:27 crc kubenswrapper[4857]: --enable-interconnect \ Mar 18 14:01:27 crc kubenswrapper[4857]: --disable-approver \ Mar 18 14:01:27 crc kubenswrapper[4857]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --wait-for-kubernetes-api=200s \ Mar 18 14:01:27 crc kubenswrapper[4857]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --loglevel="${LOGLEVEL}" Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.820564 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"0f95f7d1b3d34e34c98ede14883fff7a0cef047f4bf19eef28c38dce50514240"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.821343 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.821783 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 18 14:01:27 crc kubenswrapper[4857]: apiVersion: v1 Mar 18 14:01:27 crc kubenswrapper[4857]: clusters: Mar 18 14:01:27 crc kubenswrapper[4857]: - cluster: Mar 18 14:01:27 crc kubenswrapper[4857]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 18 14:01:27 crc kubenswrapper[4857]: server: https://api-int.crc.testing:6443 Mar 18 14:01:27 crc kubenswrapper[4857]: name: default-cluster Mar 18 14:01:27 crc kubenswrapper[4857]: contexts: Mar 18 14:01:27 crc kubenswrapper[4857]: - context: Mar 18 14:01:27 crc kubenswrapper[4857]: cluster: default-cluster Mar 18 14:01:27 crc 
kubenswrapper[4857]: namespace: default Mar 18 14:01:27 crc kubenswrapper[4857]: user: default-auth Mar 18 14:01:27 crc kubenswrapper[4857]: name: default-context Mar 18 14:01:27 crc kubenswrapper[4857]: current-context: default-context Mar 18 14:01:27 crc kubenswrapper[4857]: kind: Config Mar 18 14:01:27 crc kubenswrapper[4857]: preferences: {} Mar 18 14:01:27 crc kubenswrapper[4857]: users: Mar 18 14:01:27 crc kubenswrapper[4857]: - name: default-auth Mar 18 14:01:27 crc kubenswrapper[4857]: user: Mar 18 14:01:27 crc kubenswrapper[4857]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 18 14:01:27 crc kubenswrapper[4857]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 18 14:01:27 crc kubenswrapper[4857]: EOF Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8nhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 
14:01:27.822123 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:27 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 18 14:01:27 crc kubenswrapper[4857]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 18 14:01:27 crc kubenswrapper[4857]: --disable-webhook \ Mar 18 14:01:27 crc kubenswrapper[4857]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 18 14:01:27 crc kubenswrapper[4857]: --loglevel="${LOGLEVEL}" Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.822229 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerStarted","Data":"ebf33918b3e59f8672098727ddb3927441b4f066c457416a5a92da4d5c86ac2c"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.822940 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.823150 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-mr7s9_openshift-multus(d9391c2e-3dc6-4162-8148-71972b9c14d3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.823211 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.824201 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"d06f19df4a07df3d59d6866697c4af77c6ef9b378a38f6373fbc592a0fe3d131"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.824255 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" podUID="d9391c2e-3dc6-4162-8148-71972b9c14d3" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.825481 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.825710 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6mxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.826369 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rp52k" event={"ID":"aeb3da01-2d25-4561-9674-063dd5bb41a4","Type":"ContainerStarted","Data":"3766cbf3fce9780acf8f18c9d0c36beee1575ca51318fade448aecec5e014135"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.827519 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dw9w7" event={"ID":"d4bb5036-d0de-4152-af7f-1ef602441c3c","Type":"ContainerStarted","Data":"c010f4d29bd42eb8b88063ff892fc325efd79d39026ea07fd3a6ca4702119872"} Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.827688 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6mxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.828181 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 18 14:01:27 crc kubenswrapper[4857]: while [ true ]; Mar 18 14:01:27 crc kubenswrapper[4857]: do Mar 18 14:01:27 crc kubenswrapper[4857]: for f in $(ls /tmp/serviceca); do Mar 18 14:01:27 crc kubenswrapper[4857]: echo $f Mar 18 14:01:27 crc kubenswrapper[4857]: ca_file_path="/tmp/serviceca/${f}" Mar 18 14:01:27 crc kubenswrapper[4857]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 18 14:01:27 crc kubenswrapper[4857]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 18 14:01:27 crc kubenswrapper[4857]: if [ -e "${reg_dir_path}" ]; then Mar 18 14:01:27 crc kubenswrapper[4857]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 18 14:01:27 crc kubenswrapper[4857]: else Mar 18 14:01:27 crc kubenswrapper[4857]: mkdir $reg_dir_path Mar 18 14:01:27 crc kubenswrapper[4857]: cp $ca_file_path $reg_dir_path/ca.crt Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: for d in $(ls /etc/docker/certs.d); do Mar 18 14:01:27 crc kubenswrapper[4857]: echo $d Mar 18 14:01:27 crc kubenswrapper[4857]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 18 14:01:27 crc kubenswrapper[4857]: reg_conf_path="/tmp/serviceca/${dp}" Mar 18 14:01:27 crc kubenswrapper[4857]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 18 14:01:27 crc kubenswrapper[4857]: rm -rf /etc/docker/certs.d/$d Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait ${!} Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfkph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-rp52k_openshift-image-registry(aeb3da01-2d25-4561-9674-063dd5bb41a4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.828830 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.829327 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-rp52k" podUID="aeb3da01-2d25-4561-9674-063dd5bb41a4" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.830341 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:27 crc kubenswrapper[4857]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:27 crc kubenswrapper[4857]: set -uo pipefail Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 18 14:01:27 crc kubenswrapper[4857]: HOSTS_FILE="/etc/hosts" Mar 18 14:01:27 crc kubenswrapper[4857]: TEMP_FILE="/etc/hosts.tmp" Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Make a temporary file with the old hosts file's attributes. 
Mar 18 14:01:27 crc kubenswrapper[4857]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 18 14:01:27 crc kubenswrapper[4857]: echo "Failed to preserve hosts file. Exiting." Mar 18 14:01:27 crc kubenswrapper[4857]: exit 1 Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: while true; do Mar 18 14:01:27 crc kubenswrapper[4857]: declare -A svc_ips Mar 18 14:01:27 crc kubenswrapper[4857]: for svc in "${services[@]}"; do Mar 18 14:01:27 crc kubenswrapper[4857]: # Fetch service IP from cluster dns if present. We make several tries Mar 18 14:01:27 crc kubenswrapper[4857]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 18 14:01:27 crc kubenswrapper[4857]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 18 14:01:27 crc kubenswrapper[4857]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 18 14:01:27 crc kubenswrapper[4857]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 18 14:01:27 crc kubenswrapper[4857]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 18 14:01:27 crc kubenswrapper[4857]: for i in ${!cmds[*]} Mar 18 14:01:27 crc kubenswrapper[4857]: do Mar 18 14:01:27 crc kubenswrapper[4857]: ips=($(eval "${cmds[i]}")) Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: svc_ips["${svc}"]="${ips[@]}" Mar 18 14:01:27 crc kubenswrapper[4857]: break Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Update /etc/hosts only if we get valid service IPs Mar 18 14:01:27 crc kubenswrapper[4857]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 18 14:01:27 crc kubenswrapper[4857]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 18 14:01:27 crc kubenswrapper[4857]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 18 14:01:27 crc kubenswrapper[4857]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: continue Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # Append resolver entries for services Mar 18 14:01:27 crc kubenswrapper[4857]: rc=0 Mar 18 14:01:27 crc kubenswrapper[4857]: for svc in "${!svc_ips[@]}"; do Mar 18 14:01:27 crc kubenswrapper[4857]: for ip in ${svc_ips[${svc}]}; do Mar 18 14:01:27 crc kubenswrapper[4857]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: if [[ $rc -ne 0 ]]; then Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: continue Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: Mar 18 14:01:27 crc kubenswrapper[4857]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 18 14:01:27 crc kubenswrapper[4857]: # Replace /etc/hosts with our modified version if needed Mar 18 14:01:27 crc kubenswrapper[4857]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 18 14:01:27 crc kubenswrapper[4857]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 18 14:01:27 crc kubenswrapper[4857]: fi Mar 18 14:01:27 crc kubenswrapper[4857]: sleep 60 & wait Mar 18 14:01:27 crc kubenswrapper[4857]: unset svc_ips Mar 18 14:01:27 crc kubenswrapper[4857]: done Mar 18 14:01:27 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-dw9w7_openshift-dns(d4bb5036-d0de-4152-af7f-1ef602441c3c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:27 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:27 crc kubenswrapper[4857]: E0318 14:01:27.831538 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-dw9w7" podUID="d4bb5036-d0de-4152-af7f-1ef602441c3c" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.834422 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.845353 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.845398 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.845408 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.845424 4857 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.845437 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.850057 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.863303 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.877999 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.887101 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.899403 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.908414 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.925986 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.939147 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.948067 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.948091 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.948099 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.948112 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.948122 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:27Z","lastTransitionTime":"2026-03-18T14:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.951009 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.959810 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.971716 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.986191 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:27 crc kubenswrapper[4857]: I0318 14:01:27.997394 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.014397 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.026270 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.038705 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.049863 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.049891 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.049899 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.049911 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.049920 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.050594 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.061811 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.070254 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.079594 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.094538 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.108118 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.120613 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.132216 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.153276 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.153322 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.153334 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.153349 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.153361 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.162544 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.162555 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.162555 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.162651 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.162729 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.162830 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.255058 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.255097 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.255107 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.255123 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.255133 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.288422 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.288561 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.288666 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.288707 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.288695618 +0000 UTC m=+74.417824065 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.288877 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.288859603 +0000 UTC m=+74.417988060 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.389165 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.389231 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.389275 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.389471 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.389497 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.389519 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.389599 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.389569996 +0000 UTC m=+74.518698453 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390228 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390300 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.390281284 +0000 UTC m=+74.519409741 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390376 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390414 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390435 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:28 crc kubenswrapper[4857]: E0318 14:01:28.390470 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.390462399 +0000 UTC m=+74.519590856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.589497 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.589536 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.589545 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.589558 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.589569 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.692743 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.692819 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.692834 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.692855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.692870 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.795302 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.795355 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.795369 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.795387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.795400 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.840166 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw"] Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.840802 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.842474 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.843278 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.856965 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.866849 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.887642 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.895007 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk94h\" (UniqueName: \"kubernetes.io/projected/667fa6db-20a9-4b0f-990e-1a26e6de3207-kube-api-access-sk94h\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.895416 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.895612 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.895993 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.896569 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.897954 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.897993 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.898004 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.898021 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.898034 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:28Z","lastTransitionTime":"2026-03-18T14:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.907077 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.916445 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.925981 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.933900 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.941915 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.953166 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.966672 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.978669 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.987296 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997022 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk94h\" (UniqueName: \"kubernetes.io/projected/667fa6db-20a9-4b0f-990e-1a26e6de3207-kube-api-access-sk94h\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997068 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997084 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997106 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997470 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997788 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:28 crc kubenswrapper[4857]: I0318 14:01:28.997966 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.000692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.000725 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.000733 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.000767 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.000776 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.001711 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/667fa6db-20a9-4b0f-990e-1a26e6de3207-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.016550 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk94h\" (UniqueName: \"kubernetes.io/projected/667fa6db-20a9-4b0f-990e-1a26e6de3207-kube-api-access-sk94h\") pod \"ovnkube-control-plane-749d76644c-wvdxw\" (UID: \"667fa6db-20a9-4b0f-990e-1a26e6de3207\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.103154 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.103564 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.103599 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.103632 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 
14:01:29.103655 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.153142 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" Mar 18 14:01:29 crc kubenswrapper[4857]: W0318 14:01:29.168366 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod667fa6db_20a9_4b0f_990e_1a26e6de3207.slice/crio-44527f692534a0b483aa21ec19bf2db5fa5ac6663eaf614caf083a09e978c696 WatchSource:0}: Error finding container 44527f692534a0b483aa21ec19bf2db5fa5ac6663eaf614caf083a09e978c696: Status 404 returned error can't find the container with id 44527f692534a0b483aa21ec19bf2db5fa5ac6663eaf614caf083a09e978c696 Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.170649 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:29 crc kubenswrapper[4857]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:29 crc kubenswrapper[4857]: set -euo pipefail Mar 18 14:01:29 crc kubenswrapper[4857]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 18 14:01:29 crc kubenswrapper[4857]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 18 14:01:29 crc kubenswrapper[4857]: # As the secret mount is optional we must wait for the files to be present. 
Mar 18 14:01:29 crc kubenswrapper[4857]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 18 14:01:29 crc kubenswrapper[4857]: TS=$(date +%s) Mar 18 14:01:29 crc kubenswrapper[4857]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 18 14:01:29 crc kubenswrapper[4857]: HAS_LOGGED_INFO=0 Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: log_missing_certs(){ Mar 18 14:01:29 crc kubenswrapper[4857]: CUR_TS=$(date +%s) Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 18 14:01:29 crc kubenswrapper[4857]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 18 14:01:29 crc kubenswrapper[4857]: HAS_LOGGED_INFO=1 Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: } Mar 18 14:01:29 crc kubenswrapper[4857]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 18 14:01:29 crc kubenswrapper[4857]: log_missing_certs Mar 18 14:01:29 crc kubenswrapper[4857]: sleep 5 Mar 18 14:01:29 crc kubenswrapper[4857]: done Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 18 14:01:29 crc kubenswrapper[4857]: exec /usr/bin/kube-rbac-proxy \ Mar 18 14:01:29 crc kubenswrapper[4857]: --logtostderr \ Mar 18 14:01:29 crc kubenswrapper[4857]: --secure-listen-address=:9108 \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 18 14:01:29 crc kubenswrapper[4857]: --upstream=http://127.0.0.1:29108/ \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-private-key-file=${TLS_PK} \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-cert-file=${TLS_CERT} Mar 18 14:01:29 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk94h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-wvdxw_openshift-ovn-kubernetes(667fa6db-20a9-4b0f-990e-1a26e6de3207): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:29 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.173788 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:29 crc kubenswrapper[4857]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:29 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:29 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_join_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 18 
14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_join_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_transit_switch_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_transit_switch_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: dns_name_resolver_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "false" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: persistent_ips_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "true" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: # This is needed so that converting clusters from GA to TP Mar 18 14:01:29 crc kubenswrapper[4857]: # will rollout control plane pods as well Mar 18 14:01:29 crc kubenswrapper[4857]: network_segmentation_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: multi_network_enabled_flag= Mar 18 14:01:29 crc 
kubenswrapper[4857]: if [[ "true" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: multi_network_enabled_flag="--enable-multi-network" Mar 18 14:01:29 crc kubenswrapper[4857]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 18 14:01:29 crc kubenswrapper[4857]: exec /usr/bin/ovnkube \ Mar 18 14:01:29 crc kubenswrapper[4857]: --enable-interconnect \ Mar 18 14:01:29 crc kubenswrapper[4857]: --init-cluster-manager "${K8S_NODE}" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 18 14:01:29 crc kubenswrapper[4857]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-bind-address "127.0.0.1:29108" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-enable-pprof \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-enable-config-duration \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v4_join_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v6_join_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${dns_name_resolver_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${persistent_ips_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${multi_network_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${network_segmentation_enabled_flag} Mar 18 14:01:29 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk94h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-wvdxw_openshift-ovn-kubernetes(667fa6db-20a9-4b0f-990e-1a26e6de3207): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:29 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.175003 4857 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" podUID="667fa6db-20a9-4b0f-990e-1a26e6de3207" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.207264 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.207320 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.207334 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.207353 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.207371 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.310219 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.310261 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.310270 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.310286 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.310298 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.413054 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.413088 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.413098 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.413112 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.413120 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.515833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.515883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.515896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.515912 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.515923 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.541443 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-f7vgs"] Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.542267 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.542370 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.558235 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.568618 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.579436 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.590865 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.601097 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.603457 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8g74\" (UniqueName: \"kubernetes.io/projected/eb942ab9-842d-4078-9789-2fe1788b4dfb-kube-api-access-f8g74\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.603570 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 
14:01:29.613203 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.619146 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.619354 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.619604 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.619926 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.620202 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.623544 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.638307 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.650867 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.659620 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.670748 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.682024 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.691988 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.704558 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8g74\" (UniqueName: \"kubernetes.io/projected/eb942ab9-842d-4078-9789-2fe1788b4dfb-kube-api-access-f8g74\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.704684 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.704907 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.704979 
4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:01:30.204958749 +0000 UTC m=+74.334087246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.709743 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.719187 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.723736 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.723820 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.723839 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.723862 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.723879 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.726643 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8g74\" (UniqueName: \"kubernetes.io/projected/eb942ab9-842d-4078-9789-2fe1788b4dfb-kube-api-access-f8g74\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.827351 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.827397 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.827410 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.827426 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.827441 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.833545 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" event={"ID":"667fa6db-20a9-4b0f-990e-1a26e6de3207","Type":"ContainerStarted","Data":"44527f692534a0b483aa21ec19bf2db5fa5ac6663eaf614caf083a09e978c696"} Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.836026 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:29 crc kubenswrapper[4857]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 18 14:01:29 crc kubenswrapper[4857]: set -euo pipefail Mar 18 14:01:29 crc kubenswrapper[4857]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 18 14:01:29 crc kubenswrapper[4857]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 18 14:01:29 crc kubenswrapper[4857]: # As the secret mount is optional we must wait for the files to be present. Mar 18 14:01:29 crc kubenswrapper[4857]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 18 14:01:29 crc kubenswrapper[4857]: TS=$(date +%s) Mar 18 14:01:29 crc kubenswrapper[4857]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 18 14:01:29 crc kubenswrapper[4857]: HAS_LOGGED_INFO=0 Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: log_missing_certs(){ Mar 18 14:01:29 crc kubenswrapper[4857]: CUR_TS=$(date +%s) Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 18 14:01:29 crc kubenswrapper[4857]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Mar 18 14:01:29 crc kubenswrapper[4857]: HAS_LOGGED_INFO=1 Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: } Mar 18 14:01:29 crc kubenswrapper[4857]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 18 14:01:29 crc kubenswrapper[4857]: log_missing_certs Mar 18 14:01:29 crc kubenswrapper[4857]: sleep 5 Mar 18 14:01:29 crc kubenswrapper[4857]: done Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 18 14:01:29 crc kubenswrapper[4857]: exec /usr/bin/kube-rbac-proxy \ Mar 18 14:01:29 crc kubenswrapper[4857]: --logtostderr \ Mar 18 14:01:29 crc kubenswrapper[4857]: --secure-listen-address=:9108 \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 18 14:01:29 crc kubenswrapper[4857]: --upstream=http://127.0.0.1:29108/ \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-private-key-file=${TLS_PK} \ Mar 18 14:01:29 crc kubenswrapper[4857]: --tls-cert-file=${TLS_CERT} Mar 18 14:01:29 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk94h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-wvdxw_openshift-ovn-kubernetes(667fa6db-20a9-4b0f-990e-1a26e6de3207): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:29 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.838297 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:01:29 crc kubenswrapper[4857]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ -f "/env/_master" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: set -o allexport Mar 18 14:01:29 crc kubenswrapper[4857]: source "/env/_master" Mar 18 14:01:29 crc kubenswrapper[4857]: set +o allexport Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_join_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 18 
14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_join_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_transit_switch_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_transit_switch_subnet_opt= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "" != "" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: dns_name_resolver_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "false" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: persistent_ips_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: if [[ "true" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: # This is needed so that converting clusters from GA to TP Mar 18 14:01:29 crc kubenswrapper[4857]: # will rollout control plane pods as well Mar 18 14:01:29 crc kubenswrapper[4857]: network_segmentation_enabled_flag= Mar 18 14:01:29 crc kubenswrapper[4857]: multi_network_enabled_flag= Mar 18 14:01:29 crc 
kubenswrapper[4857]: if [[ "true" == "true" ]]; then Mar 18 14:01:29 crc kubenswrapper[4857]: multi_network_enabled_flag="--enable-multi-network" Mar 18 14:01:29 crc kubenswrapper[4857]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 18 14:01:29 crc kubenswrapper[4857]: fi Mar 18 14:01:29 crc kubenswrapper[4857]: Mar 18 14:01:29 crc kubenswrapper[4857]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 18 14:01:29 crc kubenswrapper[4857]: exec /usr/bin/ovnkube \ Mar 18 14:01:29 crc kubenswrapper[4857]: --enable-interconnect \ Mar 18 14:01:29 crc kubenswrapper[4857]: --init-cluster-manager "${K8S_NODE}" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 18 14:01:29 crc kubenswrapper[4857]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-bind-address "127.0.0.1:29108" \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-enable-pprof \ Mar 18 14:01:29 crc kubenswrapper[4857]: --metrics-enable-config-duration \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v4_join_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v6_join_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${dns_name_resolver_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${persistent_ips_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${multi_network_enabled_flag} \ Mar 18 14:01:29 crc kubenswrapper[4857]: ${network_segmentation_enabled_flag} Mar 18 14:01:29 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk94h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-wvdxw_openshift-ovn-kubernetes(667fa6db-20a9-4b0f-990e-1a26e6de3207): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 18 14:01:29 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:01:29 crc kubenswrapper[4857]: E0318 14:01:29.840291 4857 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" podUID="667fa6db-20a9-4b0f-990e-1a26e6de3207" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.847207 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.855226 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.865822 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.875880 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.884395 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.900863 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.910174 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.921648 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.929439 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.929471 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:29 crc 
kubenswrapper[4857]: I0318 14:01:29.929480 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.929492 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.929504 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:29Z","lastTransitionTime":"2026-03-18T14:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.929624 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.938471 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.948268 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.957440 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.969617 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.980285 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:29 crc kubenswrapper[4857]: I0318 14:01:29.995353 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.031522 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.031564 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.031574 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.031594 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.031608 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.134425 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.134484 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.134510 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.134531 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.134544 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.163272 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.163317 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.163272 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.163426 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.163479 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.163648 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.211230 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.211403 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.211514 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:01:31.211496113 +0000 UTC m=+75.340624570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.237622 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.237686 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.237701 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.237721 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.237734 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.312317 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.312593 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:34.312559986 +0000 UTC m=+78.441688483 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.312729 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.312896 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 
14:01:30.313031 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:34.312998447 +0000 UTC m=+78.442126934 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.340584 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.340642 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.340655 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.340674 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.340687 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.414274 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.414317 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.414511 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414555 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414563 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414611 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414631 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414650 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414672 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:34.414654735 +0000 UTC m=+78.543783192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414581 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414715 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414740 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:34.414717686 +0000 UTC m=+78.543846153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:30 crc kubenswrapper[4857]: E0318 14:01:30.414790 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:34.414775328 +0000 UTC m=+78.543903835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.443543 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.443576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.443585 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.443598 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.443610 4857 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.546026 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.546077 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.546091 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.546109 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.546122 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.648792 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.648851 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.648866 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.648883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.648898 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.751838 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.751881 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.751890 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.751904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.751914 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.853697 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.853767 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.853797 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.853818 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.853834 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.956684 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.956742 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.956782 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.956805 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:30 crc kubenswrapper[4857]: I0318 14:01:30.956822 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:30Z","lastTransitionTime":"2026-03-18T14:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.059075 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.059106 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.059116 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.059130 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.059139 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.161297 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.161337 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.161346 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.161361 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.161370 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.162664 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:31 crc kubenswrapper[4857]: E0318 14:01:31.162910 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.224329 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:31 crc kubenswrapper[4857]: E0318 14:01:31.224489 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:31 crc kubenswrapper[4857]: E0318 14:01:31.224550 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:01:33.224534431 +0000 UTC m=+77.353662878 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.263939 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.263987 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.263997 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.264012 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.264022 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.366440 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.366470 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.366479 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.366492 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.366500 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.469835 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.469870 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.469878 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.469891 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.469900 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.572497 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.572530 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.572538 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.572551 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.572561 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.675634 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.675679 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.675703 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.675726 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.675740 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.779109 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.779162 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.779176 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.779195 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.779208 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.881817 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.881857 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.881869 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.881885 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.881894 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.984783 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.984854 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.984870 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.984889 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:31 crc kubenswrapper[4857]: I0318 14:01:31.984901 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:31Z","lastTransitionTime":"2026-03-18T14:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.087387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.087432 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.087445 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.087462 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.087475 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.163134 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.163202 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.163202 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:32 crc kubenswrapper[4857]: E0318 14:01:32.163340 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:32 crc kubenswrapper[4857]: E0318 14:01:32.163517 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:32 crc kubenswrapper[4857]: E0318 14:01:32.163614 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.190769 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.190807 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.190816 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.190831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.190841 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.293239 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.293270 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.293279 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.293293 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.293301 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.396008 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.396064 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.396075 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.396092 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.396104 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.499154 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.499221 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.499238 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.499262 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.499279 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.602299 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.602370 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.602385 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.602405 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.602420 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.706745 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.706869 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.706896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.706932 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.706955 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.810345 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.810443 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.810470 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.810507 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.810533 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.913524 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.913586 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.913597 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.913612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:32 crc kubenswrapper[4857]: I0318 14:01:32.913622 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:32Z","lastTransitionTime":"2026-03-18T14:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.016372 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.016467 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.016480 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.016499 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.016510 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.131336 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.131370 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.131380 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.131395 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.131404 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.233713 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.233776 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.233797 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.233828 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.233850 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.246243 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:33 crc kubenswrapper[4857]: E0318 14:01:33.246470 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:33 crc kubenswrapper[4857]: E0318 14:01:33.246559 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:01:37.246539421 +0000 UTC m=+81.375667878 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.316362 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.316392 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:33 crc kubenswrapper[4857]: E0318 14:01:33.316509 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:33 crc kubenswrapper[4857]: E0318 14:01:33.316658 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.337445 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.337503 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.337514 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.337532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.337545 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.439686 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.439724 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.439736 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.439774 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.439786 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.542399 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.542456 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.542472 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.542489 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.542503 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.645182 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.645450 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.645582 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.645693 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.645845 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.748382 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.748446 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.748462 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.748485 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.748502 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.851191 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.851304 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.851340 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.851365 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.851383 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.953823 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.953869 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.953877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.953893 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:33 crc kubenswrapper[4857]: I0318 14:01:33.953903 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:33Z","lastTransitionTime":"2026-03-18T14:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.056094 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.056159 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.056178 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.056202 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.056212 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.158403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.158465 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.158482 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.158510 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.158534 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.162669 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.162721 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.162848 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.163018 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.260726 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.260795 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.260805 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.260819 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.260828 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.357235 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.357380 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.357497 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.357524 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:42.357476931 +0000 UTC m=+86.486605398 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.357598 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:42.357582053 +0000 UTC m=+86.486710510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.362705 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.362734 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.362743 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.362771 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.362781 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.458314 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.458383 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.458417 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458557 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 
14:01:34.458593 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458644 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458657 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458686 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:42.458659546 +0000 UTC m=+86.587788093 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458723 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:42.458705317 +0000 UTC m=+86.587833774 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458574 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458742 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458768 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:34 crc kubenswrapper[4857]: E0318 14:01:34.458796 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:42.45878937 +0000 UTC m=+86.587917937 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.465111 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.465157 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.465174 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.465200 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.465213 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.568226 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.568279 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.568289 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.568305 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.568315 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.674114 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.674182 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.674281 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.674429 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.674939 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.778317 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.778387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.778412 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.778446 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.778473 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.881478 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.881529 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.881541 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.881556 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.881566 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.983893 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.984019 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.984037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.984060 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:34 crc kubenswrapper[4857]: I0318 14:01:34.984076 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:34Z","lastTransitionTime":"2026-03-18T14:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.042703 4857 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.086643 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.086690 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.086699 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.086717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.086728 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.162587 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.162671 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:35 crc kubenswrapper[4857]: E0318 14:01:35.163195 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:35 crc kubenswrapper[4857]: E0318 14:01:35.163374 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.189559 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.189614 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.189628 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.189647 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.189662 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.292674 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.292711 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.292719 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.292732 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.292741 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.395449 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.395482 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.395491 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.395504 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.395514 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.497582 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.497619 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.497629 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.497643 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.497651 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.600206 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.600273 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.600284 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.600299 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.600331 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.703135 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.703214 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.703230 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.703248 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.703261 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.805478 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.805513 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.805523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.805537 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.805546 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.907843 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.907891 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.907904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.907924 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:35 crc kubenswrapper[4857]: I0318 14:01:35.907935 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:35Z","lastTransitionTime":"2026-03-18T14:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.010376 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.010416 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.010427 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.010443 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.010454 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.113973 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.114025 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.114036 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.114056 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.114070 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.163618 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.163630 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:36 crc kubenswrapper[4857]: E0318 14:01:36.163823 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:36 crc kubenswrapper[4857]: E0318 14:01:36.163895 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.216470 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.216533 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.216548 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.216566 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.216577 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.319126 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.319171 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.319182 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.319201 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.319214 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.421883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.421939 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.421949 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.421964 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.421974 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.524938 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.525018 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.525045 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.525069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.525083 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.628477 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.628516 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.628526 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.628554 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.628571 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.731443 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.731477 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.731486 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.731502 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.731513 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.834715 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.834786 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.834798 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.834816 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.834827 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.937383 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.937436 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.937447 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.937494 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:36 crc kubenswrapper[4857]: I0318 14:01:36.937509 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:36Z","lastTransitionTime":"2026-03-18T14:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.040434 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.040489 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.040502 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.040520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.040529 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.143808 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.143873 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.143890 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.143913 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.143925 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.163939 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.164045 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.164276 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.164395 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.176654 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.189281 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.204851 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.217881 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.228053 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.240881 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.246931 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.246990 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.247003 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.247023 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.247036 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.254676 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.265377 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.272612 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.288007 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.288343 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.288528 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:01:45.28849054 +0000 UTC m=+89.417618997 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.291553 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.305602 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.318712 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.328568 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.342429 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.350892 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.350949 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc 
kubenswrapper[4857]: I0318 14:01:37.350962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.350985 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.350999 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.353739 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.407664 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.407774 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.407798 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.407827 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.407844 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.422146 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.428000 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.428058 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.428071 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.428092 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.428105 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.442903 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.448073 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.448125 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.448139 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.448159 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.448171 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.462314 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.470796 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.470848 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.470863 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.470888 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.470903 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.483224 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.486858 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.486909 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.486921 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.486942 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.486955 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.497500 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:37 crc kubenswrapper[4857]: E0318 14:01:37.497673 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.499354 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.499408 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.499424 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.499448 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.499464 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.602241 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.602287 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.602299 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.602316 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.602327 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.705449 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.705519 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.705534 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.705553 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.705959 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.807855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.807901 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.807912 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.807931 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.807947 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.910542 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.910601 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.910616 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.910637 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:37 crc kubenswrapper[4857]: I0318 14:01:37.910652 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:37Z","lastTransitionTime":"2026-03-18T14:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.013739 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.013804 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.013814 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.013834 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.013847 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.115612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.115666 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.115676 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.115693 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.115703 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.163396 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:38 crc kubenswrapper[4857]: E0318 14:01:38.163536 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.163800 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:38 crc kubenswrapper[4857]: E0318 14:01:38.163994 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.218087 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.218330 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.218414 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.218494 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.218624 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.321482 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.321556 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.321568 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.321611 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.321627 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.424336 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.424412 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.424426 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.424441 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.424458 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.526383 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.526429 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.526441 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.526460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.526663 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.631142 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.631218 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.631235 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.631262 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.631279 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.734789 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.734843 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.734857 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.734877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.734890 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.837313 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.837351 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.837360 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.837375 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.837388 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.940032 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.940657 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.940684 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.940723 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:38 crc kubenswrapper[4857]: I0318 14:01:38.940736 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:38Z","lastTransitionTime":"2026-03-18T14:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.043328 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.043368 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.043379 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.043395 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.043406 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.145892 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.145948 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.145959 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.145975 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.145987 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.162729 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.162874 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:39 crc kubenswrapper[4857]: E0318 14:01:39.163075 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:39 crc kubenswrapper[4857]: E0318 14:01:39.163588 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.249516 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.249890 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.250037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.250157 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.250269 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.352730 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.352802 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.352814 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.352831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.352842 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.479910 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.479945 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.479954 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.479967 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.479976 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.582692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.582772 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.582787 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.582807 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.582820 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.685841 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.685896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.685918 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.685940 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.685957 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.788737 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.788805 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.788813 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.788826 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.788836 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.861835 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerStarted","Data":"7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.863991 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dw9w7" event={"ID":"d4bb5036-d0de-4152-af7f-1ef602441c3c","Type":"ContainerStarted","Data":"da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.873297 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.884904 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.891527 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.891584 4857 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.891596 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.891614 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.891635 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.897715 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.908152 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.925001 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.935318 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.946434 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.954695 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.964813 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.975795 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.985655 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.994296 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.994338 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.994352 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.994370 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.994382 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:39Z","lastTransitionTime":"2026-03-18T14:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:39 crc kubenswrapper[4857]: I0318 14:01:39.997942 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.008663 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.020452 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.030191 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.041420 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.052264 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.063372 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.073261 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.084303 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.098942 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.111952 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.120423 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276
703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.132378 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.146891 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.160505 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.256499 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.256985 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.257019 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.257706 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:40 crc kubenswrapper[4857]: E0318 14:01:40.257727 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:40 crc kubenswrapper[4857]: E0318 14:01:40.257909 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:40 crc kubenswrapper[4857]: E0318 14:01:40.258482 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.258698 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.258738 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.258771 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.258789 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.258807 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.272095 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.282352 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.301390 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.362927 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.362961 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.362969 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.362983 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.362994 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.519312 4857 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.552175 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.552222 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.552236 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.552257 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.552272 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.656605 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.656668 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.656681 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.656701 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.656712 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.779961 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.780042 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.780065 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.780099 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.780135 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.873246 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.875506 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" event={"ID":"667fa6db-20a9-4b0f-990e-1a26e6de3207","Type":"ContainerStarted","Data":"5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.879166 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rp52k" event={"ID":"aeb3da01-2d25-4561-9674-063dd5bb41a4","Type":"ContainerStarted","Data":"7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.881484 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.883404 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.883462 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.883484 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.883499 4857 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.883508 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.986834 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.986877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.986888 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.986903 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:40 crc kubenswrapper[4857]: I0318 14:01:40.986915 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:40Z","lastTransitionTime":"2026-03-18T14:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.121710 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.121794 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.121813 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.121833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.121856 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.163214 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:41 crc kubenswrapper[4857]: E0318 14:01:41.163473 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.223691 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.223724 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.223735 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.223764 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.223778 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.326088 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.326134 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.326146 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.326166 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.326184 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.437293 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.437331 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.437349 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.437388 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.437404 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.631123 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.631449 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.631460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.631483 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.631502 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.734772 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.734843 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.734872 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.734899 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.734920 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.838376 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.838460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.838475 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.838501 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.838516 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.971414 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.971466 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.971490 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.971520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.971544 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:41Z","lastTransitionTime":"2026-03-18T14:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:41 crc kubenswrapper[4857]: I0318 14:01:41.972604 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.005460 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.005784 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.008804 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.011801 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" event={"ID":"667fa6db-20a9-4b0f-990e-1a26e6de3207","Type":"ContainerStarted","Data":"54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.015036 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" 
event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerStarted","Data":"1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.019971 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.054147 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k
54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.060252 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.067223 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.073675 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.073788 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.073813 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.073830 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.073864 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.162870 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.162856 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.162939 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.163139 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.163150 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.163260 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.163333 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.173368 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.197704 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.197797 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.197819 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.197869 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.197889 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.219950 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.233069 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.302289 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.302344 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.302360 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.302387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.302408 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.319592 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.332680 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.453210 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.453426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.453602 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.453897 4857 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:01:58.453807553 +0000 UTC m=+102.582936010 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.454164 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:58.45409231 +0000 UTC m=+102.583220767 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.456740 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.456800 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.456812 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.456840 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.456863 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.535483 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.545079 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.554001 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.554057 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.554111 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554221 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554273 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:58.55425975 +0000 UTC m=+102.683388207 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554334 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554383 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554401 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554489 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:58.554463446 +0000 UTC m=+102.683591953 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554689 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554794 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.554888 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:42 crc kubenswrapper[4857]: E0318 14:01:42.555013 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:01:58.554994809 +0000 UTC m=+102.684123266 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.557802 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.559494 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.559595 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.559694 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.559795 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.559861 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.569719 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read 
at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.583211 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.595498 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.608178 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.619376 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.634730 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.675443 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.677362 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.677412 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.677425 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.677447 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.677464 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.690295 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.700860 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.712995 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.720517 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2767
03f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.739367 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\
\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"m
ountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name
\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.750670 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.761454 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.770898 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.779893 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.779940 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.779953 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.779972 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.779986 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.783526 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.795638 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.884482 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.884560 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.884581 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.884621 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.884656 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.987965 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.988019 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.988030 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.988052 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:42 crc kubenswrapper[4857]: I0318 14:01:42.988065 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:42Z","lastTransitionTime":"2026-03-18T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.023687 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb" exitCode=0 Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.023820 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.042566 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.059655 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.081057 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.091999 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.092088 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.092126 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.092170 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.092201 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.107828 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.166225 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:43 crc kubenswrapper[4857]: E0318 14:01:43.166326 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.180795 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.194978 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.195056 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.195072 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.195089 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.195097 4857 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.201668 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.216666 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.237496 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.254388 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.271599 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.291100 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.298090 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc 
kubenswrapper[4857]: I0318 14:01:43.298124 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.298133 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.298148 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.298158 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.342920 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.359325 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.388531 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.401947 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.401990 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.402018 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.402047 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.402059 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.522165 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.522219 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.522236 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.522257 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.522269 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.527567 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:43Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.771672 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc 
kubenswrapper[4857]: I0318 14:01:43.771714 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.771723 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.771738 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.771748 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.874988 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.875049 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.875065 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.875096 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.875121 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.977037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.977079 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.977090 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.977106 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:43 crc kubenswrapper[4857]: I0318 14:01:43.977124 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:43Z","lastTransitionTime":"2026-03-18T14:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.054933 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655" exitCode=0 Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.055005 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.060710 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.060781 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.074125 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.088822 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc 
kubenswrapper[4857]: I0318 14:01:44.097736 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.097802 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.097815 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.097835 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.097847 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.114701 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.212703 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.213222 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.213261 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:44 crc kubenswrapper[4857]: E0318 14:01:44.213502 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.214046 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:44 crc kubenswrapper[4857]: E0318 14:01:44.214249 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:44 crc kubenswrapper[4857]: E0318 14:01:44.220473 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.230312 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.230353 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.230364 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.230381 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.230391 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.236592 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.260709 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.278772 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.300305 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.318985 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.333087 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.342839 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.342885 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.342894 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.342920 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.342934 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.350845 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.536378 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.540023 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.540060 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.540070 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.540087 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.540096 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.552646 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.566150 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.583430 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:44Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.648460 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.648496 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.648504 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.648519 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.648528 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.785020 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.785079 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.785100 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.785138 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.785163 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.889195 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.889572 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.889744 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.889888 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.889987 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.992767 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.992807 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.992817 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.992830 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:44 crc kubenswrapper[4857]: I0318 14:01:44.992839 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:44Z","lastTransitionTime":"2026-03-18T14:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.101580 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.101608 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.101618 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.101636 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.101649 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.130822 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.130886 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.163414 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:45 crc kubenswrapper[4857]: E0318 14:01:45.163653 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.356723 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:45 crc kubenswrapper[4857]: E0318 14:01:45.356957 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:45 crc kubenswrapper[4857]: E0318 14:01:45.357033 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:02:01.357011048 +0000 UTC m=+105.486139505 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.460791 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.460842 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.460856 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.460874 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.460885 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.563623 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.563657 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.563669 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.563683 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.563693 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.699777 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.699839 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.699866 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.699901 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.699922 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.804323 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.804706 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.804719 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.804738 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.804771 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.997088 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.997127 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.997138 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.997155 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:45 crc kubenswrapper[4857]: I0318 14:01:45.997166 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:45Z","lastTransitionTime":"2026-03-18T14:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.100913 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.100971 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.100992 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.101020 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.101035 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.162838 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:46 crc kubenswrapper[4857]: E0318 14:01:46.163041 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.163680 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:46 crc kubenswrapper[4857]: E0318 14:01:46.163837 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.163923 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:46 crc kubenswrapper[4857]: E0318 14:01:46.164031 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.206283 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.206328 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.206339 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.206354 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.206365 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.209051 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerStarted","Data":"7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.212498 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.212531 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.224226 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.240306 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.261808 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.276889 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.318034 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.318082 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.318094 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.318109 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.318120 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.329867 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.341978 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc 
kubenswrapper[4857]: I0318 14:01:46.357654 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.428420 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.428508 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.428541 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.428577 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.428601 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.488523 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.506546 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.519348 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.531187 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.531224 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.531236 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.531253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.531265 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.532965 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.549308 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.687007 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.688801 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.688824 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.688832 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.688845 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.688854 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.701827 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.716164 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:46Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.791814 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.791883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.791904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.791961 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.791984 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.894587 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.894616 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.894625 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.894642 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.894650 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.997528 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.997566 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.997576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.997591 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:46 crc kubenswrapper[4857]: I0318 14:01:46.997601 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:46Z","lastTransitionTime":"2026-03-18T14:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.101948 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.102268 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.102283 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.102303 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.102316 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.169346 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.169497 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.234735 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.234834 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.234849 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.234875 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.234891 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.252524 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.277728 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.298966 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.349097 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.349129 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.349138 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.349151 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.349163 4857 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.354468 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.373065 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.397632 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.452252 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.452339 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.452367 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.452413 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 
14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.452445 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.526881 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb91
44e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.545183 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.555356 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.555406 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.555418 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.555438 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.555450 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.563632 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.593915 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.610299 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.610395 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.610413 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.610444 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.610457 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.716103 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.716235 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.728577 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.728628 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.728639 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.728657 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.728671 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.735417 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.751221 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.755418 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.755449 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.755476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.755491 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.755501 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.762272 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.770430 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.779468 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.779541 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.779571 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.779592 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.779603 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.781360 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.793634 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 
18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.793856 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.799858 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.799896 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.799905 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.799919 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.799929 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.812833 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:47 crc kubenswrapper[4857]: E0318 14:01:47.812948 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.815454 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.815490 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.815506 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.815524 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.815536 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.918187 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.918239 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.918250 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.918265 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:47 crc kubenswrapper[4857]: I0318 14:01:47.918277 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:47Z","lastTransitionTime":"2026-03-18T14:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.049042 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.049472 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.049544 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.049564 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.049576 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.153540 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.153629 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.153644 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.153700 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.153744 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.163884 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.163930 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.163978 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:48 crc kubenswrapper[4857]: E0318 14:01:48.164079 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:48 crc kubenswrapper[4857]: E0318 14:01:48.164190 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:48 crc kubenswrapper[4857]: E0318 14:01:48.164321 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.249136 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736" exitCode=0 Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.249220 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255226 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255732 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255812 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255834 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.255824 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.269112 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117
ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.370154 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.413564 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.413717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc 
kubenswrapper[4857]: I0318 14:01:48.413797 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.413810 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.413828 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.413839 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.429033 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.440831 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.461279 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.473105 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.486418 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.498459 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.511244 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.520641 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.520692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.520703 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.520720 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.520732 4857 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.532129 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.718141 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.718180 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.718189 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.718204 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.718212 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.730157 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.744562 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.757556 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.773853 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:48Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.821117 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.821167 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.821176 4857 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.821193 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.821205 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.923921 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.923964 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.923973 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.923989 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:48 crc kubenswrapper[4857]: I0318 14:01:48.923998 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:48Z","lastTransitionTime":"2026-03-18T14:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.026672 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.026726 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.026740 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.026782 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.026798 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.129526 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.129592 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.129612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.129630 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.129643 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.163280 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:49 crc kubenswrapper[4857]: E0318 14:01:49.163409 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.233232 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.233274 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.233284 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.233301 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.233314 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.268949 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca" exitCode=0 Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.269186 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.294574 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.308357 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.326054 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.336732 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.336780 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.336790 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.336805 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.336814 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.340108 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.356147 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.374145 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 
14:01:49.391301 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb16223
8e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.407964 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.423702 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.439203 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc 
kubenswrapper[4857]: I0318 14:01:49.439486 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.439575 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.439619 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.440007 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.440190 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.454221 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.480470 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.494921 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.514213 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.528342 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:49Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.543329 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.543364 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.543373 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.543387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.543397 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.727831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.727877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.727889 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.727908 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.727921 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.830149 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.830205 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.830217 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.830237 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.830250 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.932944 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.932973 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.932981 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.932996 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:49 crc kubenswrapper[4857]: I0318 14:01:49.933006 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:49Z","lastTransitionTime":"2026-03-18T14:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.036924 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.036962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.036980 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.037008 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.037030 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.139806 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.139844 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.139855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.139877 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.139890 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.162876 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:50 crc kubenswrapper[4857]: E0318 14:01:50.163106 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.163679 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:50 crc kubenswrapper[4857]: E0318 14:01:50.163775 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.164516 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:50 crc kubenswrapper[4857]: E0318 14:01:50.164625 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.185611 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.243065 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.243105 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.243116 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.243134 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.243147 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.277601 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81" exitCode=0 Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.277677 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.279161 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.295469 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.304598 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.318290 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.331886 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.343394 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.345007 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.345040 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.345049 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.345063 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.345072 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.362944 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.379275 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.402213 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.415625 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.433513 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.446025 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.447535 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.447678 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.447836 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.447962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.448091 4857 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.461307 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.477487 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.494217 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.508030 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.522601 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.533629 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.547632 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.550827 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.550853 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.550862 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc 
kubenswrapper[4857]: I0318 14:01:50.550875 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.550885 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.556872 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.573399 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.587847 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.599965 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.614358 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.628373 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.643406 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.653146 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.653173 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.653181 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.653194 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.653203 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.658481 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.670878 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.684206 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.701940 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.718181 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e
4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.729191 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.743446 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:50Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.755798 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc 
kubenswrapper[4857]: I0318 14:01:50.755851 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.755864 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.755883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.755896 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.860487 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.860825 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.860850 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.860883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:50 crc kubenswrapper[4857]: I0318 14:01:50.860907 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:50Z","lastTransitionTime":"2026-03-18T14:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.053478 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.053506 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.053516 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.053532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.053541 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.164564 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:51 crc kubenswrapper[4857]: E0318 14:01:51.165321 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.266605 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.266681 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.266700 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.266728 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.266748 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.290247 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.291482 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.291582 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.291793 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.301282 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerStarted","Data":"cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.311338 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.325840 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.326215 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.329536 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.348616 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.362819 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.369396 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.369448 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.369461 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.369480 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.369492 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.377466 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.397128 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.409398 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.422002 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.435806 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.451558 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.464202 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.490532 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.499544 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.499583 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.499604 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.499624 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.499639 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.505165 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.521321 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.535244 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc 
kubenswrapper[4857]: I0318 14:01:51.562462 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.581451 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.597115 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc 
kubenswrapper[4857]: I0318 14:01:51.613861 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.614645 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.614671 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.614679 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.614693 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.614701 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.630379 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.645608 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.657452 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.672983 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.687054 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.703673 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9
sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.717601 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.717655 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.717680 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.717699 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.717713 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.720999 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.741967 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.759803 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.776555 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.791223 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.811319 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.820618 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.820652 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.820661 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.820675 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.820685 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:51Z","lastTransitionTime":"2026-03-18T14:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:51 crc kubenswrapper[4857]: I0318 14:01:51.824042 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:51Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.016851 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc 
kubenswrapper[4857]: I0318 14:01:52.016882 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.016891 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.016904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.016912 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.119302 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.119469 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.119503 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.119567 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.119592 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.163088 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.163112 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.163188 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:52 crc kubenswrapper[4857]: E0318 14:01:52.163261 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:52 crc kubenswrapper[4857]: E0318 14:01:52.163470 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:52 crc kubenswrapper[4857]: E0318 14:01:52.163667 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.222555 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.222590 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.222602 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.222620 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.222631 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.459058 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.459101 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.459114 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.459131 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.459154 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.592129 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.592196 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.592219 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.592245 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.592262 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.694854 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.694894 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.694903 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.694918 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.694927 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.797169 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.797211 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.797221 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.797235 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.797244 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.902153 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.902291 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.902311 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.902339 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:52 crc kubenswrapper[4857]: I0318 14:01:52.902358 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:52Z","lastTransitionTime":"2026-03-18T14:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.005110 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.005154 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.005164 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.005179 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.005189 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.107135 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.107173 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.107184 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.107199 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.107211 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.163635 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:53 crc kubenswrapper[4857]: E0318 14:01:53.163828 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.210897 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.210958 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.210974 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.211011 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.211030 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.313253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.313564 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.313576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.313594 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.313604 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.416695 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.416780 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.416796 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.416814 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.416824 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.520682 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.520739 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.520768 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.520788 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.520799 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.623549 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.623595 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.623611 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.623628 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.623640 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.726619 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.726663 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.726803 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.726818 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.726827 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.829899 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.829934 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.829943 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.829956 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.829968 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.932253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.932290 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.932300 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.932314 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:53 crc kubenswrapper[4857]: I0318 14:01:53.932324 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:53Z","lastTransitionTime":"2026-03-18T14:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.035612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.035658 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.035671 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.035691 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.035702 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.138677 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.138724 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.138735 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.138769 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.138784 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.163106 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.163143 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.163190 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:54 crc kubenswrapper[4857]: E0318 14:01:54.163254 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:54 crc kubenswrapper[4857]: E0318 14:01:54.163318 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:54 crc kubenswrapper[4857]: E0318 14:01:54.163396 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.242779 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.242837 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.242861 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.242882 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.242894 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.313348 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2" exitCode=0 Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.313409 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.327360 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4
ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.341447 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.345760 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.345801 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.345811 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.345828 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.345841 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.352976 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.374697 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456523 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-1
8T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456725 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456772 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456781 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456799 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.456810 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.473507 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc 
kubenswrapper[4857]: I0318 14:01:54.489488 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.501045 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.512935 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.525000 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.535863 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.547302 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.559111 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.559151 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.559163 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc 
kubenswrapper[4857]: I0318 14:01:54.559180 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.559197 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.570391 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff14
6ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.585064 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.599075 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.617934 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:54Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.662474 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc 
kubenswrapper[4857]: I0318 14:01:54.662551 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.662576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.662600 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.662610 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.765636 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.765677 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.765693 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.765714 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.765733 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.867909 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.867962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.867974 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.867993 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.868006 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.971000 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.971088 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.971105 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.971127 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:54 crc kubenswrapper[4857]: I0318 14:01:54.971148 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:54Z","lastTransitionTime":"2026-03-18T14:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.074266 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.074318 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.074329 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.074351 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.074361 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.163461 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:55 crc kubenswrapper[4857]: E0318 14:01:55.163625 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.176619 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.176654 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.176664 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.176682 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.176695 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.485085 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.485122 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.485131 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.485144 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.485153 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.503770 4857 generic.go:334] "Generic (PLEG): container finished" podID="d9391c2e-3dc6-4162-8148-71972b9c14d3" containerID="19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7" exitCode=0 Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.503829 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerDied","Data":"19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.539623 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.581340 4857 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.587782 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.587823 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.587833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.587848 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.587858 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.607322 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.619412 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.634783 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.647698 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc 
kubenswrapper[4857]: I0318 14:01:55.664893 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.687014 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.689640 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.689682 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.689692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.689711 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.689721 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.702162 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.716865 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.736259 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.751399 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.766808 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.782653 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.792362 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.792390 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.792399 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.792411 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.792422 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.795905 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.811574 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:55Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.894576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.894636 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.894648 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.894663 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.894673 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.997478 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.997519 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.997534 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.997553 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:55 crc kubenswrapper[4857]: I0318 14:01:55.997568 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:55Z","lastTransitionTime":"2026-03-18T14:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.100143 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.100201 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.100210 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.100223 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.100233 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.163052 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.163082 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.163172 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:56 crc kubenswrapper[4857]: E0318 14:01:56.163627 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:56 crc kubenswrapper[4857]: E0318 14:01:56.163704 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:56 crc kubenswrapper[4857]: E0318 14:01:56.163811 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.177670 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.202054 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.202104 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.202114 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.202128 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.202139 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.304593 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.304641 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.304654 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.304674 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.304688 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.412833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.413166 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.413181 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.413207 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.413219 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.510191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" event={"ID":"d9391c2e-3dc6-4162-8148-71972b9c14d3","Type":"ContainerStarted","Data":"74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.521181 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.533050 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.545454 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:
57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.554032 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.573483 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.586412 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.810438 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.810481 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.810490 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.810505 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.810515 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.823087 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.839093 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc 
kubenswrapper[4857]: I0318 14:01:56.891434 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.905278 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:
00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.912677 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.912701 4857 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.912713 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.912729 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.912741 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:56Z","lastTransitionTime":"2026-03-18T14:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.924134 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.936972 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.951845 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.965529 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:56 crc kubenswrapper[4857]: I0318 14:01:56.987508 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:56Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.015536 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.015575 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.015586 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.015602 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.015614 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.021737 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.063649 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.115995 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3 is running failed: container process not found" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.116453 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3 is running failed: container process not found" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.116783 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3 is running failed: container process not found" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.116856 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3 is running failed: container process not found" 
probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.117260 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.117288 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.117298 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.117311 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.117321 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.163606 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.163734 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.176277 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.184925 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.203144 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.218168 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.219516 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.219592 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.219616 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.219651 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.219675 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.232078 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.241446 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc 
kubenswrapper[4857]: I0318 14:01:57.252553 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.265246 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.278897 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.290342 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.315274 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.322225 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.322269 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.322280 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.322296 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.322309 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.327533 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.339892 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.356344 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.372506 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.386697 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.398195 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.428096 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc 
kubenswrapper[4857]: I0318 14:01:57.428141 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.428150 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.428166 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.428177 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.514284 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/0.log" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.518087 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" exitCode=1 Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.518152 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.519000 4857 scope.go:117] "RemoveContainer" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" Mar 18 14:01:57 crc 
kubenswrapper[4857]: I0318 14:01:57.529466 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.529509 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.529521 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.529538 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.529548 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.537196 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.552719 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.570303 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.593152 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:01:57.054523 6703 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0318 14:01:57.054808 6703 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI0318 14:01:57.054819 6703 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0318 14:01:57.054834 6703 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0318 14:01:57.054841 6703 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0318 14:01:57.054879 6703 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:01:57.054888 6703 handler.go:208] Removed *v1.Node event handler 7\\\\nI0318 14:01:57.054907 6703 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0318 14:01:57.054889 6703 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0318 14:01:57.054923 6703 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0318 14:01:57.054972 6703 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0318 14:01:57.054994 6703 factory.go:656] Stopping watch factory\\\\nI0318 14:01:57.054995 6703 handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:01:57.054999 6703 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0318 14:01:57.055010 6703 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.611891 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.626373 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc 
kubenswrapper[4857]: I0318 14:01:57.632429 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.632464 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.632476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.632493 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.632504 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.641433 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.654431 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.669564 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.685584 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.709508 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.720881 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.734176 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name
\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.737150 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.737178 4857 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.737188 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.737208 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.737218 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.749327 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.764332 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.778333 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.793908 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.839916 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc 
kubenswrapper[4857]: I0318 14:01:57.839980 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.839991 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.840010 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.840022 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.841440 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.841488 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.841499 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.841516 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.841527 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.853871 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.857717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.857763 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.857776 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.857794 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.857804 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.894263 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.894321 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.894333 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.894357 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.894388 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.909059 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.913501 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.913544 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.913556 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.913622 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.913634 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.926181 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:57Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:57 crc kubenswrapper[4857]: E0318 14:01:57.926301 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.943233 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.943267 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.943277 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.943301 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:57 crc kubenswrapper[4857]: I0318 14:01:57.943311 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:57Z","lastTransitionTime":"2026-03-18T14:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.046157 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.046197 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.046206 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.046220 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.046229 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.148671 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.148718 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.148728 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.148745 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.148772 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.162920 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.163054 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.163201 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.163189 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.163288 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.163363 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.252248 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.252284 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.252295 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.252313 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.252323 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.354730 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.354793 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.354805 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.354821 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.354836 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.457614 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.457662 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.457681 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.457696 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.457706 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.510805 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.511008 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:02:30.510972446 +0000 UTC m=+134.640100913 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.511190 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.511349 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.511440 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:02:30.511419898 +0000 UTC m=+134.640548375 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.524327 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/1.log" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.525013 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/0.log" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.527959 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a" exitCode=1 Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.528008 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.528049 4857 scope.go:117] "RemoveContainer" containerID="419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.529052 4857 scope.go:117] "RemoveContainer" containerID="d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.529275 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.544079 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.556713 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc 
kubenswrapper[4857]: I0318 14:01:58.561336 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.561376 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.561389 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.561406 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.561421 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.578625 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.587780 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.601632 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.612350 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.612407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.612435 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612542 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612579 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612607 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612605 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612629 4857 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612638 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612653 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612622 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:02:30.612605043 +0000 UTC m=+134.741733500 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612790 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-18 14:02:30.612769267 +0000 UTC m=+134.741897744 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:58 crc kubenswrapper[4857]: E0318 14:01:58.612808 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:02:30.612800408 +0000 UTC m=+134.741928875 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.617283 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.628901 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.642297 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.653932 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.664411 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.664481 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.664500 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc 
kubenswrapper[4857]: I0318 14:01:58.664526 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.664543 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.668466 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19
bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.687555 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\"
,\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.698623 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.710640 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.723224 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.733631 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.752020 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:01:57.054523 6703 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0318 14:01:57.054808 6703 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0318 14:01:57.054819 6703 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0318 14:01:57.054834 6703 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0318 14:01:57.054841 6703 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0318 14:01:57.054879 6703 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:01:57.054888 6703 handler.go:208] Removed *v1.Node event handler 7\\\\nI0318 14:01:57.054907 6703 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0318 14:01:57.054889 6703 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0318 14:01:57.054923 6703 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0318 14:01:57.054972 6703 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0318 14:01:57.054994 6703 factory.go:656] Stopping watch factory\\\\nI0318 14:01:57.054995 6703 handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:01:57.054999 6703 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0318 14:01:57.055010 6703 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:58Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0318 14:01:58.358535 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0318 14:01:58.358520 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-sjqg6\\\\nI0318 14:01:58.358552 6918 model_client.go:382] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0318 14:01:58.358571 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.761850 4857 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:01:58Z is after 2025-08-24T17:21:41Z" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.766189 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.766219 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.766228 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.766241 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.766251 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.868467 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.868535 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.868544 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.868560 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.868568 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.970594 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.970688 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.970702 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.970731 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:58 crc kubenswrapper[4857]: I0318 14:01:58.970744 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:58Z","lastTransitionTime":"2026-03-18T14:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.073131 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.073188 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.073201 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.073230 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.073249 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.165139 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:01:59 crc kubenswrapper[4857]: E0318 14:01:59.165251 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.176085 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.176122 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.176163 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.176177 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.176189 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.278870 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.278907 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.278923 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.278942 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.278956 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.381720 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.381766 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.381777 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.381793 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.381803 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.483796 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.483832 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.483840 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.483853 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.483862 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.533899 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/1.log"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.586868 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.586903 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.586911 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.586924 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.586932 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.689338 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.689369 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.689380 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.689396 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.689406 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.791944 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.792016 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.792029 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.792055 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.792064 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.894775 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.894816 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.894827 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.894843 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.894853 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.997126 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.997184 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.997195 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.997210 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:01:59 crc kubenswrapper[4857]: I0318 14:01:59.997221 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:01:59Z","lastTransitionTime":"2026-03-18T14:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.100221 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.100272 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.100296 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.100317 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.100336 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.163188 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:02:00 crc kubenswrapper[4857]: E0318 14:02:00.163614 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.163216 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.163188 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:02:00 crc kubenswrapper[4857]: E0318 14:02:00.163716 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:02:00 crc kubenswrapper[4857]: E0318 14:02:00.163827 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.203520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.203564 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.203575 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.203591 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.203600 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.306616 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.306666 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.306676 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.306692 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.306702 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.409136 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.409170 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.409178 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.409191 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.409199 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.512558 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.512611 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.512634 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.512658 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.512672 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.616420 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.616470 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.616484 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.616503 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.616516 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.719456 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.719532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.719556 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.719589 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.719611 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.823368 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.823428 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.823446 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.823474 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.823494 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.926182 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.926237 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.926258 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.926280 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:00 crc kubenswrapper[4857]: I0318 14:02:00.926296 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:00Z","lastTransitionTime":"2026-03-18T14:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.028661 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.028731 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.028774 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.028796 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.028813 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.133161 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.133234 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.133250 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.133276 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.133292 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.163868 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:02:01 crc kubenswrapper[4857]: E0318 14:02:01.164155 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.236939 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.237033 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.237049 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.237069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.237081 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.340095 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.340175 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.340200 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.340232 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.340262 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.441482 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:02:01 crc kubenswrapper[4857]: E0318 14:02:01.441738 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 14:02:01 crc kubenswrapper[4857]: E0318 14:02:01.441943 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:02:33.441895995 +0000 UTC m=+137.571024502 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.442854 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.442943 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.442976 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.443013 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.443070 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.544724 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.544838 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.544853 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.544868 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.544878 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.647663 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.647730 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.647810 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.647862 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.647956 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.750962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.751048 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.751098 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.751129 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.751149 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.853610 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.853655 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.853670 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.853689 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.853700 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.956135 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.956177 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.956189 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.956208 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:01 crc kubenswrapper[4857]: I0318 14:02:01.956222 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:01Z","lastTransitionTime":"2026-03-18T14:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.058923 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.058993 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.059016 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.059038 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.059054 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.161705 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.161792 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.161810 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.161839 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.161856 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.162627 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.162652 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.162666 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:02 crc kubenswrapper[4857]: E0318 14:02:02.162813 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:02 crc kubenswrapper[4857]: E0318 14:02:02.162928 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:02 crc kubenswrapper[4857]: E0318 14:02:02.163063 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.266598 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.266700 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.266735 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.266865 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.266957 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.370122 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.370161 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.370172 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.370188 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.370199 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.473309 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.473364 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.473379 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.473401 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.473415 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.576179 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.576220 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.576233 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.576249 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.576262 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.678886 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.678937 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.678951 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.678968 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.678982 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.782010 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.782081 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.782099 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.782125 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.782145 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.885463 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.885523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.885532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.885547 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.885556 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.988145 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.988196 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.988208 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.988225 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:02 crc kubenswrapper[4857]: I0318 14:02:02.988237 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:02Z","lastTransitionTime":"2026-03-18T14:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.091357 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.091416 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.091433 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.091459 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.091477 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.163186 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:03 crc kubenswrapper[4857]: E0318 14:02:03.163371 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.194953 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.195017 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.195034 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.195063 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.195081 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.299111 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.299199 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.299229 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.299260 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.299287 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.402888 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.403665 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.403722 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.403790 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.403811 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.507013 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.507070 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.507085 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.507103 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.507114 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.609530 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.609804 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.609881 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.609960 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.610022 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.713134 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.713205 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.713228 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.713258 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.713280 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.816300 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.816361 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.816379 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.816402 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.816418 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.920013 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.920055 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.920064 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.920078 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:03 crc kubenswrapper[4857]: I0318 14:02:03.920087 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:03Z","lastTransitionTime":"2026-03-18T14:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.022588 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.022647 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.022657 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.022672 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.022683 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.125635 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.125688 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.125702 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.125719 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.125731 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.162966 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.163036 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.162989 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:04 crc kubenswrapper[4857]: E0318 14:02:04.163131 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:04 crc kubenswrapper[4857]: E0318 14:02:04.163268 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:04 crc kubenswrapper[4857]: E0318 14:02:04.163348 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.227913 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.227998 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.228011 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.228038 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.228061 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.331403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.331452 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.331467 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.331486 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.331498 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.434214 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.434270 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.434282 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.434300 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.434314 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.537253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.537290 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.537324 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.537340 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.537350 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.640217 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.640275 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.640288 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.640305 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.640316 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.742833 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.742866 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.742875 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.742890 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.742899 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.845667 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.845718 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.845727 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.845761 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.845771 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.948135 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.948188 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.948203 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.948222 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:04 crc kubenswrapper[4857]: I0318 14:02:04.948237 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:04Z","lastTransitionTime":"2026-03-18T14:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.051140 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.051173 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.051181 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.051195 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.051204 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.153792 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.153850 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.153861 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.153879 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.153894 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.163146 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:05 crc kubenswrapper[4857]: E0318 14:02:05.163312 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.255845 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.255920 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.255932 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.255948 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.255957 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.358681 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.358775 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.358789 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.358820 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.358828 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.461439 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.461473 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.461482 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.461496 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.461507 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.563421 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.563476 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.563490 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.563506 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.563517 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.666374 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.666416 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.666425 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.666445 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.666458 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.768493 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.768552 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.768567 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.768585 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.768597 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.872192 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.872257 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.872270 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.872286 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.872296 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.975014 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.975057 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.975069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.975086 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:05 crc kubenswrapper[4857]: I0318 14:02:05.975098 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:05Z","lastTransitionTime":"2026-03-18T14:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.077550 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.077587 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.077598 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.077612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.077631 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.162736 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.162818 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:06 crc kubenswrapper[4857]: E0318 14:02:06.162903 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.162827 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:06 crc kubenswrapper[4857]: E0318 14:02:06.162991 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:06 crc kubenswrapper[4857]: E0318 14:02:06.163079 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.180243 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.180302 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.180319 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.180342 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.180360 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.282640 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.282679 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.282688 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.282702 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.282711 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.385369 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.385419 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.385433 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.385454 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.385470 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.488455 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.488523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.488536 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.488563 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.488581 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.591450 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.591577 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.591588 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.591602 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.591610 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.694540 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.694583 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.694592 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.694608 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.694618 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.797347 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.797384 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.797395 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.797411 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.797423 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.900180 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.900253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.900266 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.900285 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:06 crc kubenswrapper[4857]: I0318 14:02:06.900324 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:06Z","lastTransitionTime":"2026-03-18T14:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.003535 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.003576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.003588 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.003607 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.003621 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.106648 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.106702 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.106717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.106740 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.106778 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.163111 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:07 crc kubenswrapper[4857]: E0318 14:02:07.163272 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.180282 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.195978 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.209282 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.209639 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.209828 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.210021 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.210196 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.211894 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.232798 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.246589 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.262855 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.278547 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.296484 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.312928 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.312964 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.312976 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.312991 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.313000 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.313711 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.326610 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.344004 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.361876 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.375696 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.399476 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://419592716c423d7003826a13b1afd8fed3bc69fab3c38d41cc9b198570fb0fd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:57Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:01:57.054523 6703 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0318 14:01:57.054808 6703 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0318 14:01:57.054819 6703 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0318 14:01:57.054834 6703 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0318 14:01:57.054841 6703 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0318 14:01:57.054879 6703 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:01:57.054888 6703 handler.go:208] Removed *v1.Node event handler 7\\\\nI0318 14:01:57.054907 6703 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0318 14:01:57.054889 6703 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0318 14:01:57.054923 6703 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0318 14:01:57.054972 6703 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0318 14:01:57.054994 6703 factory.go:656] Stopping watch factory\\\\nI0318 14:01:57.054995 6703 handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:01:57.054999 6703 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0318 14:01:57.055010 6703 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:58Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0318 14:01:58.358535 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0318 14:01:58.358520 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-sjqg6\\\\nI0318 14:01:58.358552 6918 model_client.go:382] Update operations generated as: [{Op:update Table:NAT 
Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0318 14:01:58.358571 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416035 4857 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416380 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416698 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416711 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416731 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.416744 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.433190 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.446848 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:07Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:07 crc 
kubenswrapper[4857]: I0318 14:02:07.519843 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.519887 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.519900 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.519920 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.519937 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.623301 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.623352 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.623361 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.623378 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.623388 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.726291 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.726474 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.726562 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.726656 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.726817 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.829778 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.829849 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.829864 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.829883 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.829895 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.932887 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.932943 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.932956 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.932983 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:07 crc kubenswrapper[4857]: I0318 14:02:07.932998 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:07Z","lastTransitionTime":"2026-03-18T14:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.037037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.037128 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.037143 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.037168 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.037180 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.052897 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.053027 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.053041 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.053059 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.053072 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.072501 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:08Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.078259 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.078311 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.078324 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.078345 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.078357 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.098705 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.098765 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.098781 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.098798 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.098811 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.117262 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:08Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.120982 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.121015 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.121026 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.121043 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.121056 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.135210 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:08Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.139052 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.139107 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.139118 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.139136 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.139148 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.150181 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:08Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.150298 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.151670 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.151717 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.151730 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.151763 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.151775 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.163184 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.163249 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.163348 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.163469 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.163685 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:08 crc kubenswrapper[4857]: E0318 14:02:08.163934 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.255190 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.255272 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.255286 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.255307 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.255322 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.359073 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.359142 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.359154 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.359175 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.359188 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.462252 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.462351 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.462363 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.462385 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.462397 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.564450 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.564520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.564532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.564547 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.564555 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.667881 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.667942 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.667955 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.667979 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.667991 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.770486 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.770531 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.770542 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.770559 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.770570 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.876907 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.876968 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.876991 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.877014 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.877027 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.980343 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.980394 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.980407 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.980424 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:08 crc kubenswrapper[4857]: I0318 14:02:08.980435 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:08Z","lastTransitionTime":"2026-03-18T14:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.083472 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.083524 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.083534 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.083552 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.083563 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.163591 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:09 crc kubenswrapper[4857]: E0318 14:02:09.163850 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.164741 4857 scope.go:117] "RemoveContainer" containerID="d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.186463 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.186509 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.186523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.186542 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.186557 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.187512 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.201385 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.219105 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.236910 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.248534 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.269492 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:58Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0318 14:01:58.358535 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0318 14:01:58.358520 6918 default_network_controller.go:776] Recording success event 
on pod openshift-machine-config-operator/machine-config-daemon-sjqg6\\\\nI0318 14:01:58.358552 6918 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0318 14:01:58.358571 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.283120 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.289318 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.289523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.289654 4857 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.289810 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.289920 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.296423 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.318458 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc 
kubenswrapper[4857]: I0318 14:02:09.334702 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.364396 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.377466 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.392887 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.392930 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.392941 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.392957 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.392972 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.394087 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.408958 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.424719 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.438273 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.452504 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.495388 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.495430 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.495440 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.495454 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.495462 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.574607 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/1.log" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.577599 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.578194 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.597737 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.597798 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.597812 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.597831 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.597842 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.599423 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.614446 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc 
kubenswrapper[4857]: I0318 14:02:09.631455 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.652038 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.669599 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.686123 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.700052 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.700107 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.700120 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.700141 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.700153 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.709089 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.730288 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.744298 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.759365 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.771610 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.785705 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.797651 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.802364 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.802409 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.802423 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.802441 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.802451 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.818118 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:58Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0318 14:01:58.358535 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0318 14:01:58.358520 6918 default_network_controller.go:776] Recording success event 
on pod openshift-machine-config-operator/machine-config-daemon-sjqg6\\\\nI0318 14:01:58.358552 6918 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0318 14:01:58.358571 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.830965 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.847318 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.860963 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:09Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.905031 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.905069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.905078 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.905093 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:09 crc kubenswrapper[4857]: I0318 14:02:09.905103 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:09Z","lastTransitionTime":"2026-03-18T14:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.007591 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.007626 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.007635 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.007651 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.007662 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.110731 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.110855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.110872 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.110902 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.110918 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.163271 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.163326 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.163369 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:10 crc kubenswrapper[4857]: E0318 14:02:10.163451 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:10 crc kubenswrapper[4857]: E0318 14:02:10.163583 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:10 crc kubenswrapper[4857]: E0318 14:02:10.163690 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.214666 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.214914 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.215015 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.215096 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.215158 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.318895 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.318990 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.319016 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.319047 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.319075 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.422584 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.422629 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.422643 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.422660 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.422671 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.526963 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.527022 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.527033 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.527054 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.527065 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.584794 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/2.log" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.586122 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/1.log" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.589948 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" exitCode=1 Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.590012 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.590067 4857 scope.go:117] "RemoveContainer" containerID="d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.591229 4857 scope.go:117] "RemoveContainer" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" Mar 18 14:02:10 crc kubenswrapper[4857]: E0318 14:02:10.591550 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.610972 4857 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c
9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.624924 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.629793 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.629828 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.629838 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.629854 4857 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.629865 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.643028 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.657783 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.670901 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.701203 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2f880d61a8d16fe0b068d2be778887d521506e5fe4b499448d50949227bb58a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:01:58Z\\\",\\\"message\\\":\\\"ssful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0318 14:01:58.358535 6918 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0318 14:01:58.358520 6918 default_network_controller.go:776] Recording success event 
on pod openshift-machine-config-operator/machine-config-daemon-sjqg6\\\\nI0318 14:01:58.358552 6918 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0318 14:01:58.358571 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host
-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.716462 4857 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733159 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733205 4857 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733217 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733237 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733249 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.733609 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.746994 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc 
kubenswrapper[4857]: I0318 14:02:10.761842 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.776248 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.814074 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.836124 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.836933 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.836984 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.836997 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.837015 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.837025 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.866708 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.881229 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.896786 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.914160 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:10Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.939474 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.939512 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.939552 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.939595 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:10 crc kubenswrapper[4857]: I0318 14:02:10.939610 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:10Z","lastTransitionTime":"2026-03-18T14:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.042278 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.042328 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.042337 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.042352 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.042362 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.144463 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.144497 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.144507 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.144520 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.144528 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.163019 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:11 crc kubenswrapper[4857]: E0318 14:02:11.163215 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.247193 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.247241 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.247251 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.247268 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.247286 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.349816 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.349936 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.349956 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.350028 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.350046 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.452337 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.452374 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.452384 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.452415 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.452426 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.555016 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.555068 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.555078 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.555093 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.555105 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.597575 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/2.log" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.602598 4857 scope.go:117] "RemoveContainer" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" Mar 18 14:02:11 crc kubenswrapper[4857]: E0318 14:02:11.602832 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.620627 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.634380 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.658546 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.658611 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.658629 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.658659 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.658676 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.664207 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.682267 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.697692 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.713850 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc 
kubenswrapper[4857]: I0318 14:02:11.733804 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.747564 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.762080 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.762156 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.762170 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.762194 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.762207 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.764981 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.797730 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.811815 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.828385 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.843706 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.859997 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.865403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.865466 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.865481 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.865506 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.865521 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.877236 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.889837 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.903698 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:11Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.968887 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:11 crc 
kubenswrapper[4857]: I0318 14:02:11.968955 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.968969 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.968992 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:11 crc kubenswrapper[4857]: I0318 14:02:11.969006 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:11Z","lastTransitionTime":"2026-03-18T14:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.071621 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.071665 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.071676 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.071695 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.071707 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.163548 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.163638 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.163566 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:12 crc kubenswrapper[4857]: E0318 14:02:12.163853 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:12 crc kubenswrapper[4857]: E0318 14:02:12.164057 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:12 crc kubenswrapper[4857]: E0318 14:02:12.164252 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.175530 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.175745 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.175924 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.176022 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.176126 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.279566 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.279652 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.279666 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.279686 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.279704 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.383072 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.383156 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.383179 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.383209 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.383229 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.487035 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.487093 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.487104 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.487124 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.487138 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.589566 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.589634 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.589647 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.589669 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.589682 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.692891 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.692961 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.692976 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.693006 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.693021 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.796780 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.796826 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.796836 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.796855 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.796867 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.900849 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.900919 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.900932 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.900964 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:12 crc kubenswrapper[4857]: I0318 14:02:12.900979 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:12Z","lastTransitionTime":"2026-03-18T14:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.004895 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.004952 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.004961 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.004983 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.004998 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.108508 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.108567 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.108580 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.108603 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.108616 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.163634 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:13 crc kubenswrapper[4857]: E0318 14:02:13.164629 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.212714 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.212847 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.212868 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.212898 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.212916 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.316493 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.316568 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.316588 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.316612 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.316634 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.419994 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.420052 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.420069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.420097 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.420112 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.522897 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.522973 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.522991 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.523015 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.523033 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.625924 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.625975 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.625984 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.626001 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.626014 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.729001 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.729076 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.729117 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.729152 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.729177 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.831926 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.831957 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.831965 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.831978 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.831988 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.934523 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.935013 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.935209 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.935348 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:13 crc kubenswrapper[4857]: I0318 14:02:13.935472 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:13Z","lastTransitionTime":"2026-03-18T14:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.039227 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.039276 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.039284 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.039299 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.039309 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.142873 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.142972 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.143000 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.143051 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.143075 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.163491 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.163596 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:14 crc kubenswrapper[4857]: E0318 14:02:14.163703 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.163607 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:14 crc kubenswrapper[4857]: E0318 14:02:14.163888 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:14 crc kubenswrapper[4857]: E0318 14:02:14.164047 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.246060 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.246170 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.246197 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.246259 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.246284 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.348931 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.349006 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.349040 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.349057 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.349067 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.451930 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.451962 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.451970 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.451983 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.451993 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.554350 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.554394 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.554406 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.554429 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.554441 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.657253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.657301 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.657325 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.657352 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.657371 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.759858 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.759930 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.759959 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.759981 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.759998 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.863171 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.863232 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.863248 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.863271 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.863286 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.966180 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.966226 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.966238 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.966256 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:14 crc kubenswrapper[4857]: I0318 14:02:14.966270 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:14Z","lastTransitionTime":"2026-03-18T14:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.068730 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.068816 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.068832 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.068857 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.068869 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.162985 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:15 crc kubenswrapper[4857]: E0318 14:02:15.163128 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.171424 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.171481 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.171492 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.171505 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.171513 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.274387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.274432 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.274445 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.274462 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.274474 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.377409 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.378387 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.378416 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.378439 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.378452 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.480416 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.480500 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.480517 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.480539 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.480550 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.583288 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.583337 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.583347 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.583367 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.583377 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.685995 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.686034 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.686045 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.686061 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.686072 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.789490 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.789554 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.789563 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.789589 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.789602 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.892150 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.892197 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.892209 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.892228 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.892240 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.995110 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.995174 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.995192 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.995213 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:15 crc kubenswrapper[4857]: I0318 14:02:15.995228 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:15Z","lastTransitionTime":"2026-03-18T14:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.097295 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.097361 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.097374 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.097393 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.097406 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.162894 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.162894 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.162918 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:16 crc kubenswrapper[4857]: E0318 14:02:16.163213 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:16 crc kubenswrapper[4857]: E0318 14:02:16.163269 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:16 crc kubenswrapper[4857]: E0318 14:02:16.163046 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.199884 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.199936 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.199956 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.199973 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.199987 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.302558 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.302628 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.302646 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.302666 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.302680 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.405029 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.405089 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.405106 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.405125 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.405138 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.507529 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.507578 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.507591 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.507611 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.507623 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.610320 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.610627 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.610637 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.610668 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.610680 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.713829 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.713904 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.713915 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.713930 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.713940 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.816603 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.816713 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.816732 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.816785 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.816808 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.919205 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.919289 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.919334 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.919356 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:16 crc kubenswrapper[4857]: I0318 14:02:16.919368 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:16Z","lastTransitionTime":"2026-03-18T14:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.022213 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.022282 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.022296 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.022332 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.022343 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:17Z","lastTransitionTime":"2026-03-18T14:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:17 crc kubenswrapper[4857]: E0318 14:02:17.122979 4857 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.162861 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:17 crc kubenswrapper[4857]: E0318 14:02:17.162977 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.174418 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.179443 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.191810 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.213959 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.227222 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.240181 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.250875 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc 
kubenswrapper[4857]: I0318 14:02:17.270415 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.282024 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.293278 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.305254 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.317796 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.331215 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.342299 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.357327 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.370531 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.380387 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: I0318 14:02:17.391855 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:17Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:17 crc kubenswrapper[4857]: E0318 14:02:17.600545 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.163489 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.163645 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.163739 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.163659 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.164042 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.164151 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.251598 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.251670 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.251689 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.251715 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.251733 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:18Z","lastTransitionTime":"2026-03-18T14:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.273913 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:18Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.278537 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.278570 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.278580 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.278595 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.278605 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:18Z","lastTransitionTime":"2026-03-18T14:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.292325 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:18Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.296189 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.296245 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.296256 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.296273 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.296283 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:18Z","lastTransitionTime":"2026-03-18T14:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.308520 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:18Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.312510 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.312563 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.312577 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.312599 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.312612 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:18Z","lastTransitionTime":"2026-03-18T14:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.326344 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:18Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.330472 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.330519 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.330532 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.330549 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:18 crc kubenswrapper[4857]: I0318 14:02:18.330562 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:18Z","lastTransitionTime":"2026-03-18T14:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.344859 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:18Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:18 crc kubenswrapper[4857]: E0318 14:02:18.344975 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:02:19 crc kubenswrapper[4857]: I0318 14:02:19.163440 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:19 crc kubenswrapper[4857]: E0318 14:02:19.163660 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:20 crc kubenswrapper[4857]: I0318 14:02:20.162894 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:20 crc kubenswrapper[4857]: I0318 14:02:20.163039 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:20 crc kubenswrapper[4857]: I0318 14:02:20.163015 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:20 crc kubenswrapper[4857]: E0318 14:02:20.163150 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:20 crc kubenswrapper[4857]: E0318 14:02:20.163518 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:20 crc kubenswrapper[4857]: E0318 14:02:20.163649 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:21 crc kubenswrapper[4857]: I0318 14:02:21.163017 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:21 crc kubenswrapper[4857]: E0318 14:02:21.163215 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:22 crc kubenswrapper[4857]: I0318 14:02:22.163136 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:22 crc kubenswrapper[4857]: I0318 14:02:22.163184 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:22 crc kubenswrapper[4857]: I0318 14:02:22.163343 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:22 crc kubenswrapper[4857]: E0318 14:02:22.163328 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:22 crc kubenswrapper[4857]: E0318 14:02:22.163512 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:22 crc kubenswrapper[4857]: E0318 14:02:22.163592 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:22 crc kubenswrapper[4857]: E0318 14:02:22.601802 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:23 crc kubenswrapper[4857]: I0318 14:02:23.163564 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:23 crc kubenswrapper[4857]: E0318 14:02:23.164402 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:23 crc kubenswrapper[4857]: I0318 14:02:23.164740 4857 scope.go:117] "RemoveContainer" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" Mar 18 14:02:23 crc kubenswrapper[4857]: E0318 14:02:23.165068 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:02:24 crc kubenswrapper[4857]: I0318 14:02:24.162553 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:24 crc kubenswrapper[4857]: I0318 14:02:24.162672 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:24 crc kubenswrapper[4857]: E0318 14:02:24.162720 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:24 crc kubenswrapper[4857]: I0318 14:02:24.162846 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:24 crc kubenswrapper[4857]: E0318 14:02:24.162930 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:24 crc kubenswrapper[4857]: E0318 14:02:24.163001 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:25 crc kubenswrapper[4857]: I0318 14:02:25.163935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:25 crc kubenswrapper[4857]: E0318 14:02:25.164191 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:26 crc kubenswrapper[4857]: I0318 14:02:26.163145 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:26 crc kubenswrapper[4857]: E0318 14:02:26.163304 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:26 crc kubenswrapper[4857]: I0318 14:02:26.163422 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:26 crc kubenswrapper[4857]: E0318 14:02:26.163683 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:26 crc kubenswrapper[4857]: I0318 14:02:26.163723 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:26 crc kubenswrapper[4857]: E0318 14:02:26.163852 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.162742 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:27 crc kubenswrapper[4857]: E0318 14:02:27.164062 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.181401 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.197435 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.212156 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.229045 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.252202 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.264124 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.282041 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.299997 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.318600 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.335938 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.350321 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.363699 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.376271 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.385541 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.406174 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.419580 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.434589 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc kubenswrapper[4857]: I0318 14:02:27.447203 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:27Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:27 crc 
kubenswrapper[4857]: E0318 14:02:27.603567 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.162712 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.162797 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.162868 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.162928 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.163023 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.163159 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.667010 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.667098 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.667120 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.667150 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.667174 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:28Z","lastTransitionTime":"2026-03-18T14:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.680907 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:28Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.686499 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.686558 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.686576 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.686605 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.686628 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:28Z","lastTransitionTime":"2026-03-18T14:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.703026 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:28Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.711238 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.711475 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.711495 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.711524 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.711543 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:28Z","lastTransitionTime":"2026-03-18T14:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.727840 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:28Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.732175 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.732218 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.732229 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.732248 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.732260 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:28Z","lastTransitionTime":"2026-03-18T14:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.748687 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:28Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.752567 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.752605 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.752614 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.752629 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:28 crc kubenswrapper[4857]: I0318 14:02:28.752638 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:28Z","lastTransitionTime":"2026-03-18T14:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.763774 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:28Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:28 crc kubenswrapper[4857]: E0318 14:02:28.763914 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:02:29 crc kubenswrapper[4857]: I0318 14:02:29.163162 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:29 crc kubenswrapper[4857]: E0318 14:02:29.163384 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:29 crc kubenswrapper[4857]: I0318 14:02:29.175998 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.163384 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.163408 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.163555 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.163408 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.163638 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.163685 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.558815 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.559002 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.559074 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:34.559037913 +0000 UTC m=+198.688166380 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.559099 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.559154 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:03:34.559140386 +0000 UTC m=+198.688268843 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.659647 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.659730 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:30 crc kubenswrapper[4857]: I0318 14:02:30.659801 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659880 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659932 4857 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659945 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659982 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659958 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.659996 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.660001 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:03:34.659979448 +0000 UTC m=+198.789107925 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.660015 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.660059 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:03:34.6600406 +0000 UTC m=+198.789169047 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:02:30 crc kubenswrapper[4857]: E0318 14:02:30.660087 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:03:34.66006707 +0000 UTC m=+198.789195587 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 14:02:31 crc kubenswrapper[4857]: I0318 14:02:31.163417 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:31 crc kubenswrapper[4857]: E0318 14:02:31.163580 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:32 crc kubenswrapper[4857]: I0318 14:02:32.163480 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:32 crc kubenswrapper[4857]: E0318 14:02:32.163688 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:32 crc kubenswrapper[4857]: I0318 14:02:32.164037 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:32 crc kubenswrapper[4857]: I0318 14:02:32.164094 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:32 crc kubenswrapper[4857]: E0318 14:02:32.164206 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:32 crc kubenswrapper[4857]: E0318 14:02:32.164410 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:32 crc kubenswrapper[4857]: E0318 14:02:32.605027 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.163126 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:33 crc kubenswrapper[4857]: E0318 14:02:33.163284 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.491104 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:33 crc kubenswrapper[4857]: E0318 14:02:33.491284 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:02:33 crc kubenswrapper[4857]: E0318 14:02:33.491942 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:03:37.491920708 +0000 UTC m=+201.621049165 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.681307 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/0.log" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.681365 4857 generic.go:334] "Generic (PLEG): container finished" podID="0ca53fe8-513c-4226-8659-208b304ffb78" containerID="7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b" exitCode=1 Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.681413 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerDied","Data":"7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b"} Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.681875 4857 scope.go:117] "RemoveContainer" containerID="7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.695118 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.707777 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc 
kubenswrapper[4857]: I0318 14:02:33.720370 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.733573 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.747664 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.759326 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.772094 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.784173 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.802429 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.813206 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.828861 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name
\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.845531 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.866164 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"2026-03-18T14:01:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577\\\\n2026-03-18T14:01:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577 to /host/opt/cni/bin/\\\\n2026-03-18T14:01:47Z [verbose] multus-daemon started\\\\n2026-03-18T14:01:47Z [verbose] Readiness Indicator file check\\\\n2026-03-18T14:02:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.889314 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.907890 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.926289 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.938857 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.954988 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:33 crc kubenswrapper[4857]: I0318 14:02:33.970514 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:33Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.162837 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.162891 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.162837 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:34 crc kubenswrapper[4857]: E0318 14:02:34.163010 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:34 crc kubenswrapper[4857]: E0318 14:02:34.163057 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:34 crc kubenswrapper[4857]: E0318 14:02:34.163109 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.686045 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/0.log" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.686097 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerStarted","Data":"b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee"} Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.702680 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.714327 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.728378 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.746786 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.759120 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.772508 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ 
exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.785336 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.801383 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.815287 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.835802 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.861482 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.878050 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.898103 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"2026-03-18T14:01:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577\\\\n2026-03-18T14:01:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577 to /host/opt/cni/bin/\\\\n2026-03-18T14:01:47Z [verbose] multus-daemon started\\\\n2026-03-18T14:01:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-18T14:02:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.918210 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.930888 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.954292 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.970693 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:34 crc kubenswrapper[4857]: I0318 14:02:34.985124 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:35 crc kubenswrapper[4857]: I0318 14:02:35.001203 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:34Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:35 crc 
kubenswrapper[4857]: I0318 14:02:35.162880 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:35 crc kubenswrapper[4857]: E0318 14:02:35.163046 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:36 crc kubenswrapper[4857]: I0318 14:02:36.163144 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:36 crc kubenswrapper[4857]: I0318 14:02:36.163173 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:36 crc kubenswrapper[4857]: I0318 14:02:36.163144 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:36 crc kubenswrapper[4857]: E0318 14:02:36.163278 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:36 crc kubenswrapper[4857]: E0318 14:02:36.163378 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:36 crc kubenswrapper[4857]: E0318 14:02:36.163529 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.163538 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:37 crc kubenswrapper[4857]: E0318 14:02:37.164856 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.165361 4857 scope.go:117] "RemoveContainer" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.182126 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serv
ing-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.195034 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc 
kubenswrapper[4857]: I0318 14:02:37.216637 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.230446 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.244550 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.258479 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.274139 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.287963 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.301482 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.315582 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.329142 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.346977 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.360955 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.372329 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.385561 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"2026-03-18T14:01:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577\\\\n2026-03-18T14:01:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577 to /host/opt/cni/bin/\\\\n2026-03-18T14:01:47Z [verbose] multus-daemon started\\\\n2026-03-18T14:01:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-18T14:02:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.400182 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.412378 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.436588 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6
730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.447195 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: E0318 14:02:37.606309 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.698584 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/2.log" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.701826 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea"} Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.702299 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.721096 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0
6bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.738689 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.754481 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"2026-03-18T14:01:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577\\\\n2026-03-18T14:01:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577 to /host/opt/cni/bin/\\\\n2026-03-18T14:01:47Z [verbose] multus-daemon started\\\\n2026-03-18T14:01:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-18T14:02:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.766604 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.775535 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.792079 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.804281 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d
1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.816130 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.826343 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc 
kubenswrapper[4857]: I0318 14:02:37.842933 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.852080 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.864103 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.879821 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.893022 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.907732 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.921368 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.935324 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.947256 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:37 crc kubenswrapper[4857]: I0318 14:02:37.960282 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:37Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:38 crc kubenswrapper[4857]: I0318 14:02:38.331162 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:38 crc kubenswrapper[4857]: I0318 14:02:38.331243 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:38 crc kubenswrapper[4857]: E0318 14:02:38.331322 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:38 crc kubenswrapper[4857]: E0318 14:02:38.331386 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:38 crc kubenswrapper[4857]: I0318 14:02:38.331458 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:38 crc kubenswrapper[4857]: I0318 14:02:38.331492 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:38 crc kubenswrapper[4857]: E0318 14:02:38.331549 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:38 crc kubenswrapper[4857]: E0318 14:02:38.331633 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.154468 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.154519 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.154528 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.154543 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.154551 4857 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:39Z","lastTransitionTime":"2026-03-18T14:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.167673 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.171037 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.171069 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.171077 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.171089 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.171098 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:39Z","lastTransitionTime":"2026-03-18T14:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.183853 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.187320 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.187380 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.187389 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.187403 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.187412 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:39Z","lastTransitionTime":"2026-03-18T14:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.200451 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.204259 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.204303 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.204312 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.204328 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.204337 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:39Z","lastTransitionTime":"2026-03-18T14:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.216496 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.220434 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.220479 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.220489 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.220512 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.220532 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:39Z","lastTransitionTime":"2026-03-18T14:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.238930 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6b8b991-9330-4333-ba64-213d0025158e\\\",\\\"systemUUID\\\":\\\"9936aba9-9b46-46dc-9830-1269a6a97f25\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.239107 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.784838 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/3.log" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.785529 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/2.log" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.788053 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea" exitCode=1 Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.788126 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea"} Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.788184 4857 scope.go:117] "RemoveContainer" containerID="22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.788966 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea" Mar 18 14:02:39 crc kubenswrapper[4857]: E0318 14:02:39.789168 4857 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.808362 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27f3c481-ef1a-4bf7-b415-fd8d017f98d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://66887d4e4a1235f7cf3ba0eff1948fa0
a5445ae46680356d09bb87f133d95426\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:57Z\\\",\\\"message\\\":\\\"W0318 14:00:56.678610 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0318 14:00:56.679046 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773842456 cert, and key in /tmp/serving-cert-745294003/serving-signer.crt, /tmp/serving-cert-745294003/serving-signer.key\\\\nI0318 14:00:56.907216 1 observer_polling.go:159] Starting file observer\\\\nW0318 14:00:56.916465 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0318 14:00:56.916644 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0318 14:00:56.917329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-745294003/tls.crt::/tmp/serving-cert-745294003/tls.key\\\\\\\"\\\\nF0318 14:00:57.486804 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.820587 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dw9w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4bb5036-d0de-4152-af7f-1ef602441c3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da28cd731f2671a642c922dc65fd9722faea2a29cee51170ebf7bb3b03e01ab5\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rqh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dw9w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.835366 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bdlm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ca53fe8-513c-4226-8659-208b304ffb78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:33Z\\\",\\\"message\\\":\\\"2026-03-18T14:01:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577\\\\n2026-03-18T14:01:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e95990f-12aa-4fd1-8642-7c1e63f88577 to /host/opt/cni/bin/\\\\n2026-03-18T14:01:47Z [verbose] multus-daemon started\\\\n2026-03-18T14:01:47Z [verbose] 
Readiness Indicator file check\\\\n2026-03-18T14:02:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:02:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k54kd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bdlm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.850811 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f83af3c5170312071f6425a744629ac5bbedee0d7a986ca12066b677c005dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://312f3d9276b7daeb9410fa19080f40fe10c8e4968aa6bbe640ac27f73f484cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.863070 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rp52k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aeb3da01-2d25-4561-9674-063dd5bb41a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7daa5dc523faf5c3a4e98725e4161a734cfae04cce76999f062e3804c4195c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rfkph\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rp52k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.884145 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22b9e7aab5e8ee54e8c74e718c05ed555e758a906e2ef3639ff8c114fa59eb1c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:10Z\\\",\\\"message\\\":\\\" handler.go:208] Removed *v1.Node event handler 2\\\\nI0318 14:02:10.047359 7074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0318 14:02:10.048669 7074 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 
14:02:10.048775 7074 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:10.048934 7074 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:10.049084 7074 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:10.049375 7074 factory.go:656] Stopping watch factory\\\\nI0318 14:02:10.049495 7074 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0318 14:02:10.049520 7074 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0318 14:02:10.049856 7074 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:10.049932 7074 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:10.050078 7074 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-18T14:02:39Z\\\",\\\"message\\\":\\\"etes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:39.098850 7358 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:39.098966 7358 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0318 14:02:39.099009 7358 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:39.098851 7358 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:39.099171 7358 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0318 14:02:39.099426 7358 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0318 14:02:39.099681 7358 factory.go:656] Stopping watch factory\\\\nI0318 14:02:39.099790 7358 ovnkube.go:599] Stopped ovnkube\\\\nI0318 14:02:39.099840 7358 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0318 14:02:39.099920 7358 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:02:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\
\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8nhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bpx9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.900744 4857 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"667fa6db-20a9-4b0f-990e-1a26e6de3207\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a7981350f18faa865dc023cf899052c733e8c790c87c70e614dbd1b7f8a3228\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54f452f21a69729129618262311d45abf1c5d1952923ab021c99823ea32ab76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sk94h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wvdxw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.915391 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0b056f874e4730acdf9ee1a39a186c0f5a07df2a3ea05279d75640896c91f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.930179 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb942ab9-842d-4078-9789-2fe1788b4dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8g74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f7vgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc 
kubenswrapper[4857]: I0318 14:02:39.944569 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ced41a8f87eb56a219d7ded9c7270fecfb6600883b0d300949e1bd9684eb34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.958835 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.985522 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faadbc4d-343f-444b-a2a2-76c67b8b0cae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb1b0f5925279aa3f748ec05933a05d2955a28ada3cf14d18a989358ce93146\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05294eacdb478196021cfba2786f02e1bc2274576ca1d60b158e56b2026e7941\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6dd2b58b48e95a10166648b4a5b2d5fe794454b349c9a4371f03944e8d05c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a9da1e91fcfb0a8f575085727ab77304a0a3296d8602919052ff1eced379e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4d7dfddee4e3dbaf6c0cbfc31750b4fb8e18803f2f169df4181adfe21994294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0e6b170125c02f858c7340567fabc121271933d6f2422f7719bb2ac0f4b18\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5cf05ff1780f6bbb072fe25727e7c62db4ec3b2b38e335a1f53b123808a0204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1e11df919486193b9d20522a01a794d8501ae7696cae17d1bbbc83164062124\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:39 crc kubenswrapper[4857]: I0318 14:02:39.999087 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-03-18T14:02:39Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.014794 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ 
exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.029982 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.044629 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.057810 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.070108 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b115eb6c-2a12-4d60-b269-911a639d8eb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64ece16900a3c84add14e5746cbc3570734d861ac706a28982785ceea3b103d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dc
e5f58a3fec35353e1d218839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6mxw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sjqg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.088471 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9391c2e-3dc6-4162-8148-71972b9c14d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74ed34b015f9e8c28a57c89a70b528da5312af57ecac69f4d717cd3081391158\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e03e7fbe03e7f7415b49bbf1df2452d0989f33e18a3a0fcb7e8575fd27f5655\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7716f4c16f81a6beb2280745510894b474daf8b6b63a4fb4af62aaf7dae58736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b431da03b17f28395f694fc7dd912ec49ff146ad5f1df77c907d31532520f8ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce55a
cfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce55acfe44649b490d714065453271675d6d5e30d982bc286d0b0eec25e60b81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbb53b77d68f881422c8853aae1888fc0495747025b9a9050300db1235f28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19bf9930532d99fe8ead270cf542914feb1f5553f7143d7a560f7ec67fee3aa7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:01:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:01:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-mr7s9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:40Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.162552 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.162582 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:40 crc kubenswrapper[4857]: E0318 14:02:40.162708 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.162827 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:40 crc kubenswrapper[4857]: E0318 14:02:40.162881 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:40 crc kubenswrapper[4857]: E0318 14:02:40.162985 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.163300 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:40 crc kubenswrapper[4857]: E0318 14:02:40.163640 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:40 crc kubenswrapper[4857]: I0318 14:02:40.794273 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/3.log" Mar 18 14:02:42 crc kubenswrapper[4857]: I0318 14:02:42.162620 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:42 crc kubenswrapper[4857]: I0318 14:02:42.162677 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:42 crc kubenswrapper[4857]: I0318 14:02:42.162655 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:42 crc kubenswrapper[4857]: I0318 14:02:42.162696 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:42 crc kubenswrapper[4857]: E0318 14:02:42.162856 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:42 crc kubenswrapper[4857]: E0318 14:02:42.162888 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:42 crc kubenswrapper[4857]: E0318 14:02:42.163059 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:42 crc kubenswrapper[4857]: E0318 14:02:42.163184 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:42 crc kubenswrapper[4857]: E0318 14:02:42.608091 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:44 crc kubenswrapper[4857]: I0318 14:02:44.163217 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:44 crc kubenswrapper[4857]: I0318 14:02:44.163262 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:44 crc kubenswrapper[4857]: I0318 14:02:44.163262 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:44 crc kubenswrapper[4857]: E0318 14:02:44.163368 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:44 crc kubenswrapper[4857]: E0318 14:02:44.163480 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:44 crc kubenswrapper[4857]: E0318 14:02:44.163606 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:44 crc kubenswrapper[4857]: I0318 14:02:44.163875 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:44 crc kubenswrapper[4857]: E0318 14:02:44.163951 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:46 crc kubenswrapper[4857]: I0318 14:02:46.162939 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:46 crc kubenswrapper[4857]: I0318 14:02:46.162999 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:46 crc kubenswrapper[4857]: I0318 14:02:46.162935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:46 crc kubenswrapper[4857]: E0318 14:02:46.163095 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:46 crc kubenswrapper[4857]: I0318 14:02:46.162935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:46 crc kubenswrapper[4857]: E0318 14:02:46.163378 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:46 crc kubenswrapper[4857]: E0318 14:02:46.163541 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:46 crc kubenswrapper[4857]: E0318 14:02:46.163611 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.180208 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d382d4-8fc1-48fd-96dd-6585b01285b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e19c6eb3f531b75d0c28931756c3d5d9fb2fe0023fd101c48c40b2cb8a66a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a0e734499284966276b3af89174bfd58a39921758805727008e397affa5207\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-18T14:02:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.193999 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fefc81-eb3a-4e0f-b0e3-56dcfef38acd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9ab44998d4e6875f82991d3c9fd50c468bc564f6a2d6e4e83dc7b83364eb6e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7c95f12961082e702956318404241e35175cd76ef51e0fcca15438aab75e9c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-18T14:00:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec 
cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0318 14:00:20.393115 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0318 14:00:20.399214 1 observer_polling.go:159] Starting file observer\\\\nI0318 14:00:20.611195 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0318 14:00:20.629396 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0318 14:00:44.573549 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0318 14:00:44.573667 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:00:44Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86e226f2457052b279ce4d6ad5945c9150a0dec1ea2986788bf1b4df3a9c5d62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab3f049ad85ef40f6115c0b835d539754dc24a6c069df72b493fbdf2cf86eaf8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.207209 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26018c31-87ec-4d41-a981-73eea90968b4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T14:00:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c5aac025b5938bffb109eae2bb44c9d39337cd557e14054809a9e3d83a2b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b194ef0701b7841abe22f593ca560db6b74383916e80591dc1c403d6b03534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-18T14:00:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://44c41cc3401df58827e69c7b511cde19076c445712b8900562aa4d2206fcd5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T14:00:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T14:00:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-18T14:00:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.223885 4857 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T14:01:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-18T14:02:47Z is after 2025-08-24T17:21:41Z" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.298218 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=51.298183388 podStartE2EDuration="51.298183388s" podCreationTimestamp="2026-03-18 14:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.29716416 +0000 UTC m=+151.426292627" watchObservedRunningTime="2026-03-18 14:02:47.298183388 +0000 UTC m=+151.427311855" Mar 18 14:02:47 
crc kubenswrapper[4857]: I0318 14:02:47.315496 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mr7s9" podStartSLOduration=101.315469364 podStartE2EDuration="1m41.315469364s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.31492851 +0000 UTC m=+151.444056977" watchObservedRunningTime="2026-03-18 14:02:47.315469364 +0000 UTC m=+151.444597831" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.343395 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podStartSLOduration=102.343374944 podStartE2EDuration="1m42.343374944s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.343144298 +0000 UTC m=+151.472272765" watchObservedRunningTime="2026-03-18 14:02:47.343374944 +0000 UTC m=+151.472503401" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.364023 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dw9w7" podStartSLOduration=102.364001743 podStartE2EDuration="1m42.364001743s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.363914681 +0000 UTC m=+151.493043138" watchObservedRunningTime="2026-03-18 14:02:47.364001743 +0000 UTC m=+151.493130200" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.381121 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bdlm5" podStartSLOduration=101.381099265 podStartE2EDuration="1m41.381099265s" 
podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.379579603 +0000 UTC m=+151.508708070" watchObservedRunningTime="2026-03-18 14:02:47.381099265 +0000 UTC m=+151.510227712" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.399064 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.39904173 podStartE2EDuration="1m20.39904173s" podCreationTimestamp="2026-03-18 14:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.398163426 +0000 UTC m=+151.527291883" watchObservedRunningTime="2026-03-18 14:02:47.39904173 +0000 UTC m=+151.528170187" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.408615 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rp52k" podStartSLOduration=102.408596123 podStartE2EDuration="1m42.408596123s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.408363087 +0000 UTC m=+151.537491554" watchObservedRunningTime="2026-03-18 14:02:47.408596123 +0000 UTC m=+151.537724580" Mar 18 14:02:47 crc kubenswrapper[4857]: I0318 14:02:47.443679 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wvdxw" podStartSLOduration=101.44365559 podStartE2EDuration="1m41.44365559s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:47.443113806 +0000 UTC m=+151.572242283" 
watchObservedRunningTime="2026-03-18 14:02:47.44365559 +0000 UTC m=+151.572784047" Mar 18 14:02:47 crc kubenswrapper[4857]: E0318 14:02:47.609010 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:48 crc kubenswrapper[4857]: I0318 14:02:48.163831 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:48 crc kubenswrapper[4857]: I0318 14:02:48.163864 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:48 crc kubenswrapper[4857]: I0318 14:02:48.163928 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:48 crc kubenswrapper[4857]: I0318 14:02:48.163831 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:48 crc kubenswrapper[4857]: E0318 14:02:48.164004 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:48 crc kubenswrapper[4857]: E0318 14:02:48.164133 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:48 crc kubenswrapper[4857]: E0318 14:02:48.164274 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:48 crc kubenswrapper[4857]: E0318 14:02:48.164346 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.297253 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.297353 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.297381 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.297420 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.297446 4857 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-18T14:02:49Z","lastTransitionTime":"2026-03-18T14:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.354285 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g"] Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.354881 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.357545 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.358628 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.358804 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.359094 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.407599 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=59.407575305 podStartE2EDuration="59.407575305s" podCreationTimestamp="2026-03-18 14:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:49.407322308 +0000 UTC m=+153.536450765" watchObservedRunningTime="2026-03-18 14:02:49.407575305 +0000 UTC m=+153.536703782" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.427598 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=20.427575897 podStartE2EDuration="20.427575897s" podCreationTimestamp="2026-03-18 14:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:49.426711003 +0000 UTC m=+153.555839460" watchObservedRunningTime="2026-03-18 
14:02:49.427575897 +0000 UTC m=+153.556704374" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.444074 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=32.444048381 podStartE2EDuration="32.444048381s" podCreationTimestamp="2026-03-18 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:49.4436382 +0000 UTC m=+153.572766667" watchObservedRunningTime="2026-03-18 14:02:49.444048381 +0000 UTC m=+153.573176838" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.453192 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.453536 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3456aada-0806-469f-b8bb-2418a87767da-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.453684 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3456aada-0806-469f-b8bb-2418a87767da-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 
14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.453928 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3456aada-0806-469f-b8bb-2418a87767da-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.454046 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.555706 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3456aada-0806-469f-b8bb-2418a87767da-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.555884 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.556006 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.556079 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.556153 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3456aada-0806-469f-b8bb-2418a87767da-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.556218 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3456aada-0806-469f-b8bb-2418a87767da-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.556264 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3456aada-0806-469f-b8bb-2418a87767da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.557252 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3456aada-0806-469f-b8bb-2418a87767da-service-ca\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.561231 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3456aada-0806-469f-b8bb-2418a87767da-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.576023 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3456aada-0806-469f-b8bb-2418a87767da-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-h8n8g\" (UID: \"3456aada-0806-469f-b8bb-2418a87767da\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.674062 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" Mar 18 14:02:49 crc kubenswrapper[4857]: W0318 14:02:49.696039 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3456aada_0806_469f_b8bb_2418a87767da.slice/crio-ac13ae1159d90bd1553b5d757242df0070c6b950179545af5456e3a37a6b215b WatchSource:0}: Error finding container ac13ae1159d90bd1553b5d757242df0070c6b950179545af5456e3a37a6b215b: Status 404 returned error can't find the container with id ac13ae1159d90bd1553b5d757242df0070c6b950179545af5456e3a37a6b215b Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.831822 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" event={"ID":"3456aada-0806-469f-b8bb-2418a87767da","Type":"ContainerStarted","Data":"bd77fd108f4d0d5e29e46c2ae9acfb2c6ef32b31f8643158c90413129b41a4d1"} Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.832376 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" event={"ID":"3456aada-0806-469f-b8bb-2418a87767da","Type":"ContainerStarted","Data":"ac13ae1159d90bd1553b5d757242df0070c6b950179545af5456e3a37a6b215b"} Mar 18 14:02:49 crc kubenswrapper[4857]: I0318 14:02:49.850100 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-h8n8g" podStartSLOduration=104.850077612 podStartE2EDuration="1m44.850077612s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:02:49.849588238 +0000 UTC m=+153.978716715" watchObservedRunningTime="2026-03-18 14:02:49.850077612 +0000 UTC m=+153.979206069" Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.066237 4857 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.075083 4857 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.162935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.163005 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.163012 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.163118 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:50 crc kubenswrapper[4857]: E0318 14:02:50.163157 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:50 crc kubenswrapper[4857]: E0318 14:02:50.163293 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:50 crc kubenswrapper[4857]: E0318 14:02:50.163423 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:50 crc kubenswrapper[4857]: E0318 14:02:50.163527 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:50 crc kubenswrapper[4857]: I0318 14:02:50.165105 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea" Mar 18 14:02:50 crc kubenswrapper[4857]: E0318 14:02:50.165395 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" Mar 18 14:02:52 crc kubenswrapper[4857]: I0318 14:02:52.163102 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:52 crc kubenswrapper[4857]: I0318 14:02:52.163188 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:52 crc kubenswrapper[4857]: I0318 14:02:52.163235 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:52 crc kubenswrapper[4857]: I0318 14:02:52.163265 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:52 crc kubenswrapper[4857]: E0318 14:02:52.163253 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:02:52 crc kubenswrapper[4857]: E0318 14:02:52.163335 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:52 crc kubenswrapper[4857]: E0318 14:02:52.163419 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:52 crc kubenswrapper[4857]: E0318 14:02:52.163476 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:52 crc kubenswrapper[4857]: E0318 14:02:52.609890 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:02:54 crc kubenswrapper[4857]: I0318 14:02:54.162841 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:02:54 crc kubenswrapper[4857]: I0318 14:02:54.162859 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:02:54 crc kubenswrapper[4857]: I0318 14:02:54.162873 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:02:54 crc kubenswrapper[4857]: I0318 14:02:54.163021 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:02:54 crc kubenswrapper[4857]: E0318 14:02:54.163801 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:02:54 crc kubenswrapper[4857]: E0318 14:02:54.163867 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:02:54 crc kubenswrapper[4857]: E0318 14:02:54.163706 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:02:54 crc kubenswrapper[4857]: E0318 14:02:54.164016 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:02:56 crc kubenswrapper[4857]: I0318 14:02:56.162935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:02:56 crc kubenswrapper[4857]: I0318 14:02:56.163054 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:02:56 crc kubenswrapper[4857]: E0318 14:02:56.163124 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:02:56 crc kubenswrapper[4857]: E0318 14:02:56.163298 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:02:56 crc kubenswrapper[4857]: I0318 14:02:56.163334 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:02:56 crc kubenswrapper[4857]: E0318 14:02:56.163449 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:02:56 crc kubenswrapper[4857]: I0318 14:02:56.163464 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:02:56 crc kubenswrapper[4857]: E0318 14:02:56.163674 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:02:57 crc kubenswrapper[4857]: E0318 14:02:57.610961 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 14:02:58 crc kubenswrapper[4857]: I0318 14:02:58.163616 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:02:58 crc kubenswrapper[4857]: I0318 14:02:58.163658 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:02:58 crc kubenswrapper[4857]: I0318 14:02:58.163631 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:02:58 crc kubenswrapper[4857]: I0318 14:02:58.163616 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:02:58 crc kubenswrapper[4857]: E0318 14:02:58.163823 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:02:58 crc kubenswrapper[4857]: E0318 14:02:58.163994 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:02:58 crc kubenswrapper[4857]: E0318 14:02:58.164113 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:02:58 crc kubenswrapper[4857]: E0318 14:02:58.164219 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:00 crc kubenswrapper[4857]: I0318 14:03:00.162855 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:00 crc kubenswrapper[4857]: I0318 14:03:00.162921 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:00 crc kubenswrapper[4857]: I0318 14:03:00.162889 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:00 crc kubenswrapper[4857]: I0318 14:03:00.162889 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:00 crc kubenswrapper[4857]: E0318 14:03:00.163009 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:00 crc kubenswrapper[4857]: E0318 14:03:00.163173 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:00 crc kubenswrapper[4857]: E0318 14:03:00.163191 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:00 crc kubenswrapper[4857]: E0318 14:03:00.163250 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:02 crc kubenswrapper[4857]: I0318 14:03:02.163374 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:02 crc kubenswrapper[4857]: I0318 14:03:02.163370 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:02 crc kubenswrapper[4857]: I0318 14:03:02.163390 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:02 crc kubenswrapper[4857]: E0318 14:03:02.163939 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:02 crc kubenswrapper[4857]: E0318 14:03:02.163621 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:02 crc kubenswrapper[4857]: E0318 14:03:02.164165 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:02 crc kubenswrapper[4857]: I0318 14:03:02.163420 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:02 crc kubenswrapper[4857]: E0318 14:03:02.164291 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:02 crc kubenswrapper[4857]: E0318 14:03:02.612257 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 14:03:03 crc kubenswrapper[4857]: I0318 14:03:03.164195 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea"
Mar 18 14:03:03 crc kubenswrapper[4857]: E0318 14:03:03.164406 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f"
Mar 18 14:03:04 crc kubenswrapper[4857]: I0318 14:03:04.163336 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:04 crc kubenswrapper[4857]: I0318 14:03:04.163414 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:04 crc kubenswrapper[4857]: I0318 14:03:04.163429 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:04 crc kubenswrapper[4857]: E0318 14:03:04.163477 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:04 crc kubenswrapper[4857]: I0318 14:03:04.163494 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:04 crc kubenswrapper[4857]: E0318 14:03:04.163732 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:04 crc kubenswrapper[4857]: E0318 14:03:04.163828 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:04 crc kubenswrapper[4857]: E0318 14:03:04.164151 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:06 crc kubenswrapper[4857]: I0318 14:03:06.163371 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:06 crc kubenswrapper[4857]: I0318 14:03:06.163431 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:06 crc kubenswrapper[4857]: I0318 14:03:06.163431 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:06 crc kubenswrapper[4857]: I0318 14:03:06.163519 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:06 crc kubenswrapper[4857]: E0318 14:03:06.163525 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:06 crc kubenswrapper[4857]: E0318 14:03:06.163619 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:06 crc kubenswrapper[4857]: E0318 14:03:06.163677 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:06 crc kubenswrapper[4857]: E0318 14:03:06.163696 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:07 crc kubenswrapper[4857]: E0318 14:03:07.613719 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 14:03:08 crc kubenswrapper[4857]: I0318 14:03:08.162914 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:08 crc kubenswrapper[4857]: I0318 14:03:08.162914 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:08 crc kubenswrapper[4857]: I0318 14:03:08.163005 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:08 crc kubenswrapper[4857]: I0318 14:03:08.163128 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:08 crc kubenswrapper[4857]: E0318 14:03:08.163217 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:08 crc kubenswrapper[4857]: E0318 14:03:08.163314 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:08 crc kubenswrapper[4857]: E0318 14:03:08.163413 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:08 crc kubenswrapper[4857]: E0318 14:03:08.163476 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:10 crc kubenswrapper[4857]: I0318 14:03:10.163537 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:10 crc kubenswrapper[4857]: I0318 14:03:10.163625 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:10 crc kubenswrapper[4857]: E0318 14:03:10.163937 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:10 crc kubenswrapper[4857]: I0318 14:03:10.163657 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:10 crc kubenswrapper[4857]: E0318 14:03:10.163993 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:10 crc kubenswrapper[4857]: I0318 14:03:10.163640 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:10 crc kubenswrapper[4857]: E0318 14:03:10.164073 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:10 crc kubenswrapper[4857]: E0318 14:03:10.164182 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:12 crc kubenswrapper[4857]: I0318 14:03:12.163451 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:12 crc kubenswrapper[4857]: I0318 14:03:12.163496 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:12 crc kubenswrapper[4857]: I0318 14:03:12.163532 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:12 crc kubenswrapper[4857]: I0318 14:03:12.163536 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:12 crc kubenswrapper[4857]: E0318 14:03:12.163625 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:12 crc kubenswrapper[4857]: E0318 14:03:12.163741 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:12 crc kubenswrapper[4857]: E0318 14:03:12.163879 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:12 crc kubenswrapper[4857]: E0318 14:03:12.163940 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:12 crc kubenswrapper[4857]: E0318 14:03:12.615393 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 14:03:14 crc kubenswrapper[4857]: I0318 14:03:14.162829 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:14 crc kubenswrapper[4857]: I0318 14:03:14.162903 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:14 crc kubenswrapper[4857]: I0318 14:03:14.162911 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:14 crc kubenswrapper[4857]: I0318 14:03:14.162999 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:14 crc kubenswrapper[4857]: E0318 14:03:14.162988 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:14 crc kubenswrapper[4857]: E0318 14:03:14.163068 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:14 crc kubenswrapper[4857]: E0318 14:03:14.163168 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:14 crc kubenswrapper[4857]: E0318 14:03:14.163579 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:15 crc kubenswrapper[4857]: I0318 14:03:15.164109 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea"
Mar 18 14:03:15 crc kubenswrapper[4857]: E0318 14:03:15.164363 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bpx9l_openshift-ovn-kubernetes(5bdcb274-14da-4683-8c0a-0b71e2d2a16f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f"
Mar 18 14:03:16 crc kubenswrapper[4857]: I0318 14:03:16.163250 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:16 crc kubenswrapper[4857]: I0318 14:03:16.163381 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:16 crc kubenswrapper[4857]: E0318 14:03:16.163405 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:16 crc kubenswrapper[4857]: I0318 14:03:16.163250 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:16 crc kubenswrapper[4857]: I0318 14:03:16.163274 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:16 crc kubenswrapper[4857]: E0318 14:03:16.163550 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:16 crc kubenswrapper[4857]: E0318 14:03:16.163667 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:16 crc kubenswrapper[4857]: E0318 14:03:16.163707 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:17 crc kubenswrapper[4857]: E0318 14:03:17.617236 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 18 14:03:18 crc kubenswrapper[4857]: I0318 14:03:18.163318 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:18 crc kubenswrapper[4857]: I0318 14:03:18.163347 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:18 crc kubenswrapper[4857]: I0318 14:03:18.163421 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:18 crc kubenswrapper[4857]: E0318 14:03:18.163565 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:18 crc kubenswrapper[4857]: E0318 14:03:18.163628 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:18 crc kubenswrapper[4857]: E0318 14:03:18.163718 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:18 crc kubenswrapper[4857]: I0318 14:03:18.164034 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:18 crc kubenswrapper[4857]: E0318 14:03:18.164112 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.163205 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.163272 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.163311 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.163419 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:20 crc kubenswrapper[4857]: E0318 14:03:20.163416 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:20 crc kubenswrapper[4857]: E0318 14:03:20.163655 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:20 crc kubenswrapper[4857]: E0318 14:03:20.163734 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:20 crc kubenswrapper[4857]: E0318 14:03:20.163970 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.941258 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/1.log"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.941845 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/0.log"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.941885 4857 generic.go:334] "Generic (PLEG): container finished" podID="0ca53fe8-513c-4226-8659-208b304ffb78" containerID="b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee" exitCode=1
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.941917 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerDied","Data":"b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee"}
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.941953 4857 scope.go:117] "RemoveContainer" containerID="7b053f460610136067eb728f3233d4a737044d697423e46521442326ffceab4b"
Mar 18 14:03:20 crc kubenswrapper[4857]: I0318 14:03:20.942284 4857 scope.go:117] "RemoveContainer" containerID="b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee"
Mar 18 14:03:20 crc kubenswrapper[4857]: E0318 14:03:20.942421 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bdlm5_openshift-multus(0ca53fe8-513c-4226-8659-208b304ffb78)\"" pod="openshift-multus/multus-bdlm5" podUID="0ca53fe8-513c-4226-8659-208b304ffb78"
Mar 18 14:03:21 crc kubenswrapper[4857]: I0318 14:03:21.946904 4857 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/1.log" Mar 18 14:03:22 crc kubenswrapper[4857]: I0318 14:03:22.163234 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:22 crc kubenswrapper[4857]: I0318 14:03:22.163280 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:22 crc kubenswrapper[4857]: I0318 14:03:22.163251 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:22 crc kubenswrapper[4857]: I0318 14:03:22.163396 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:22 crc kubenswrapper[4857]: E0318 14:03:22.163534 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:22 crc kubenswrapper[4857]: E0318 14:03:22.163410 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:22 crc kubenswrapper[4857]: E0318 14:03:22.163668 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:22 crc kubenswrapper[4857]: E0318 14:03:22.163736 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:22 crc kubenswrapper[4857]: E0318 14:03:22.619294 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:03:24 crc kubenswrapper[4857]: I0318 14:03:24.163568 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:24 crc kubenswrapper[4857]: I0318 14:03:24.163620 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:24 crc kubenswrapper[4857]: I0318 14:03:24.163719 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:24 crc kubenswrapper[4857]: E0318 14:03:24.163718 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:24 crc kubenswrapper[4857]: I0318 14:03:24.163806 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:24 crc kubenswrapper[4857]: E0318 14:03:24.164006 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:24 crc kubenswrapper[4857]: E0318 14:03:24.164181 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:24 crc kubenswrapper[4857]: E0318 14:03:24.164284 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:26 crc kubenswrapper[4857]: I0318 14:03:26.162737 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:26 crc kubenswrapper[4857]: I0318 14:03:26.162788 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:26 crc kubenswrapper[4857]: I0318 14:03:26.162788 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:26 crc kubenswrapper[4857]: E0318 14:03:26.162903 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:26 crc kubenswrapper[4857]: I0318 14:03:26.162966 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:26 crc kubenswrapper[4857]: E0318 14:03:26.163057 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:26 crc kubenswrapper[4857]: E0318 14:03:26.163133 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:26 crc kubenswrapper[4857]: E0318 14:03:26.163161 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:27 crc kubenswrapper[4857]: E0318 14:03:27.620796 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.162999 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.163153 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.163220 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.163287 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:28 crc kubenswrapper[4857]: E0318 14:03:28.163931 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:28 crc kubenswrapper[4857]: E0318 14:03:28.164703 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.164807 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea" Mar 18 14:03:28 crc kubenswrapper[4857]: E0318 14:03:28.164824 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:28 crc kubenswrapper[4857]: E0318 14:03:28.164641 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.974904 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/3.log" Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.977871 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerStarted","Data":"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c"} Mar 18 14:03:28 crc kubenswrapper[4857]: I0318 14:03:28.978336 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:03:29 crc kubenswrapper[4857]: I0318 14:03:29.005425 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podStartSLOduration=143.005403317 podStartE2EDuration="2m23.005403317s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:29.005075158 +0000 UTC m=+193.134203615" watchObservedRunningTime="2026-03-18 14:03:29.005403317 +0000 UTC m=+193.134531774" Mar 18 14:03:29 crc kubenswrapper[4857]: I0318 14:03:29.273771 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f7vgs"] Mar 18 14:03:29 crc kubenswrapper[4857]: I0318 14:03:29.273891 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:29 crc kubenswrapper[4857]: E0318 14:03:29.273998 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:30 crc kubenswrapper[4857]: I0318 14:03:30.162722 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:30 crc kubenswrapper[4857]: I0318 14:03:30.162746 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:30 crc kubenswrapper[4857]: E0318 14:03:30.163213 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:30 crc kubenswrapper[4857]: I0318 14:03:30.162787 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:30 crc kubenswrapper[4857]: E0318 14:03:30.163290 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:30 crc kubenswrapper[4857]: E0318 14:03:30.163371 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:31 crc kubenswrapper[4857]: I0318 14:03:31.163509 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:31 crc kubenswrapper[4857]: E0318 14:03:31.163837 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.163048 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.163113 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.163322 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:32 crc kubenswrapper[4857]: E0318 14:03:32.163447 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:32 crc kubenswrapper[4857]: E0318 14:03:32.163656 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:32 crc kubenswrapper[4857]: E0318 14:03:32.163692 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.163905 4857 scope.go:117] "RemoveContainer" containerID="b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee" Mar 18 14:03:32 crc kubenswrapper[4857]: E0318 14:03:32.622351 4857 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.997216 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/1.log" Mar 18 14:03:32 crc kubenswrapper[4857]: I0318 14:03:32.997318 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerStarted","Data":"45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31"} Mar 18 14:03:33 crc kubenswrapper[4857]: I0318 14:03:33.163776 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:03:33 crc kubenswrapper[4857]: E0318 14:03:33.164054 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.163421 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.163496 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.163916 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.164067 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.163512 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.164170 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.575863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.576054 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.576123 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:05:36.576092579 +0000 UTC m=+320.705221036 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.576186 4857 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.576273 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:05:36.576240104 +0000 UTC m=+320.705368631 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.677013 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.677060 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:03:34 crc kubenswrapper[4857]: I0318 14:03:34.677124 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677260 4857 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677272 4857 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677318 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677332 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-18 14:05:36.677311721 +0000 UTC m=+320.806440178 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677335 4857 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677352 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677410 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-18 14:05:36.677389553 +0000 UTC m=+320.806518070 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677453 4857 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677470 4857 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 14:03:34 crc kubenswrapper[4857]: E0318 14:03:34.677541 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-18 14:05:36.677519227 +0000 UTC m=+320.806647684 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 14:03:35 crc kubenswrapper[4857]: I0318 14:03:35.163145 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:35 crc kubenswrapper[4857]: E0318 14:03:35.163295 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:36 crc kubenswrapper[4857]: I0318 14:03:36.163426 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:36 crc kubenswrapper[4857]: I0318 14:03:36.163549 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:36 crc kubenswrapper[4857]: E0318 14:03:36.163591 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 18 14:03:36 crc kubenswrapper[4857]: E0318 14:03:36.163713 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 18 14:03:36 crc kubenswrapper[4857]: I0318 14:03:36.163808 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:36 crc kubenswrapper[4857]: E0318 14:03:36.163863 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 18 14:03:37 crc kubenswrapper[4857]: I0318 14:03:37.163038 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:37 crc kubenswrapper[4857]: E0318 14:03:37.164514 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f7vgs" podUID="eb942ab9-842d-4078-9789-2fe1788b4dfb"
Mar 18 14:03:37 crc kubenswrapper[4857]: I0318 14:03:37.500989 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:37 crc kubenswrapper[4857]: E0318 14:03:37.501202 4857 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 14:03:37 crc kubenswrapper[4857]: E0318 14:03:37.501310 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs podName:eb942ab9-842d-4078-9789-2fe1788b4dfb nodeName:}" failed. No retries permitted until 2026-03-18 14:05:39.501290152 +0000 UTC m=+323.630418599 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs") pod "network-metrics-daemon-f7vgs" (UID: "eb942ab9-842d-4078-9789-2fe1788b4dfb") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.163390 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.163465 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.163416 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.166034 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.166393 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.166482 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 18 14:03:38 crc kubenswrapper[4857]: I0318 14:03:38.166871 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 14:03:39 crc kubenswrapper[4857]: I0318 14:03:39.163245 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs"
Mar 18 14:03:39 crc kubenswrapper[4857]: I0318 14:03:39.167333 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 18 14:03:39 crc kubenswrapper[4857]: I0318 14:03:39.167985 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.143346 4857 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.226200 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr84c"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.226900 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qr84c"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.230471 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.230498 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.230658 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.237516 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k6kp8"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.238042 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-k6kp8"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256218 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256395 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256505 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256524 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256647 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256738 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256696 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.256870 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.257059 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.257567 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.257628 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.257898 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.258559 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.260163 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.260031 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.261258 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.265825 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rwkm"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.266224 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.266447 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.266631 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.267041 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fr8cx"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.267085 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.267385 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.267409 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.267695 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xsbrw"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.268125 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.268191 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.268194 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.268745 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.269543 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.269786 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.270199 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bqqp"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.270364 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.274481 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.274722 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.274943 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.275048 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.275060 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.275160 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.275997 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.276442 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.276505 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.276599 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.276722 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.277017 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.279683 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.280345 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.284181 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gvkpz"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.284565 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xwln7"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.284951 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4cprr"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.285078 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gvkpz"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.281524 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.280956 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.281717 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.284947 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.281778 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.284946 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.285983 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwln7"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.286408 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.293925 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294056 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294200 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294371 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294419 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294571 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294859 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.294941 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.307045 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.307396 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.307644 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.308252 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.308690 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.309292 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.309683 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.310026 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.310392 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.314280 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.314581 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.315125 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.315225 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.315454 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.316431 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.316610 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.317356 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.317925 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.318218 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.318309 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.318608 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.318734 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.318866 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.319179 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.320686 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.320969 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.321062 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.321342 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.329151 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.330989 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.331580 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.332898 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564042-j5cmc"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.334900 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564042-j5cmc"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.335318 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.335923 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.336677 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.336964 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.340925 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.341509 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.341797 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.341891 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342125 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.341436 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr"]
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342198 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342502 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342555 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342616 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.345560 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d82f58-0677-4450-baff-d3620aa86b32-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346633 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346668 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xbf\" (UniqueName: \"kubernetes.io/projected/0cbc9065-8609-4637-958c-805de5c08411-kube-api-access-v8xbf\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346691 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346708 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-config\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346737 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-serving-cert\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346786 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mskc7\" (UniqueName: \"kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346810 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346849 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346875 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99p5l\" (UniqueName: \"kubernetes.io/projected/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-kube-api-access-99p5l\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346897 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp"
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346925 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jczrb\" (UniqueName: \"kubernetes.io/projected/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-kube-api-access-jczrb\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " 
pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346949 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346973 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-serving-cert\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346998 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347021 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347041 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-image-import-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347066 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-etcd-client\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347088 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347109 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347133 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-config\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 
14:03:40.347157 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gmx\" (UniqueName: \"kubernetes.io/projected/e427c5bb-ebf9-4836-8a31-9968569fbe48-kube-api-access-64gmx\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347198 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347222 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347240 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pmp6j\" (UniqueName: \"kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347259 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347279 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-service-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-encryption-config\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347321 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvw2n\" (UniqueName: \"kubernetes.io/projected/d4300327-af6f-4261-8973-ef640d24993f-kube-api-access-zvw2n\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.347340 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347359 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-config\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347393 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347414 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzp8\" (UniqueName: \"kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8\") pod \"auto-csr-approver-29564042-j5cmc\" (UID: \"287df787-86a7-4a56-b5a1-fb55b6bed91b\") " pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347436 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347457 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-config\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347479 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-client\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347500 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e427c5bb-ebf9-4836-8a31-9968569fbe48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347520 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" 
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347544 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-trusted-ca\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347566 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347586 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347609 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmbvw\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-kube-api-access-wmbvw\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347631 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-serving-cert\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347651 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347669 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-client\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347689 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347720 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-serving-cert\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.342682 4857 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343302 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.346572 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.351266 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343370 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343417 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343456 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343491 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343511 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343604 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343627 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 
14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343676 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343769 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.343915 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.344004 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.344005 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.345186 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.345236 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.352594 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.345269 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.345504 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.349126 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.353138 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.355810 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.356563 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.360692 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v6brz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.364358 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.365018 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.365417 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k6kp8"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.365547 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.347742 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.365989 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0cbc9065-8609-4637-958c-805de5c08411-machine-approver-tls\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366020 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit-dir\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366052 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26stn\" (UniqueName: \"kubernetes.io/projected/85980a8c-19a9-4b94-8d91-a7fdbad22cab-kube-api-access-26stn\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366069 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366087 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppvt\" (UniqueName: \"kubernetes.io/projected/ef638f17-5999-467e-b170-8ef20068e451-kube-api-access-fppvt\") pod \"downloads-7954f5f757-gvkpz\" (UID: \"ef638f17-5999-467e-b170-8ef20068e451\") " pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366102 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366121 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-images\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366137 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-node-pullsecrets\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366160 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-serving-cert\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366179 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgd86\" (UniqueName: \"kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366199 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-auth-proxy-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366224 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366248 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366266 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4224v\" (UniqueName: \"kubernetes.io/projected/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-kube-api-access-4224v\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366301 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366319 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366336 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-encryption-config\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366358 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366529 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366803 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-serving-cert\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366829 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366847 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbsh8\" (UniqueName: \"kubernetes.io/projected/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-kube-api-access-mbsh8\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366863 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366879 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d82f58-0677-4450-baff-d3620aa86b32-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366897 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-serving-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366925 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366941 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" 
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366962 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.366984 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367004 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcf6v\" (UniqueName: \"kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367019 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkl5w\" (UniqueName: \"kubernetes.io/projected/096c78f1-127f-4281-81b4-22ff1fd40e04-kube-api-access-pkl5w\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367035 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a03aa3-6810-477f-8f45-79abe51d7d7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367051 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5a03aa3-6810-477f-8f45-79abe51d7d7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367070 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-config\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367085 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367106 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d4300327-af6f-4261-8973-ef640d24993f-audit-dir\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367122 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4slpv\" (UniqueName: \"kubernetes.io/projected/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-kube-api-access-4slpv\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367137 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-metrics-tls\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367156 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-audit-policies\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367170 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 
14:03:40.367188 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367202 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096c78f1-127f-4281-81b4-22ff1fd40e04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367221 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmnm\" (UniqueName: \"kubernetes.io/projected/04d82f58-0677-4450-baff-d3620aa86b32-kube-api-access-bgmnm\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.367238 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2x9s\" (UniqueName: \"kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.368336 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.368881 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.369785 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.375895 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.377488 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.377775 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.379726 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.383377 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.396225 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.399637 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.400103 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.405677 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.406982 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.408724 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.412283 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.416723 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.420725 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.421779 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.422662 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.423498 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.424213 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.426225 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.426544 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.427859 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.428017 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.428501 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wp82x"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.428639 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.429385 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.429671 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.429714 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.430834 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.431007 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.433291 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rwkm"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.433331 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr84c"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.433401 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.434338 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.435565 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.436408 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.437047 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.438398 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.439692 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.440971 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gvkpz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.442109 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.443353 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fr8cx"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.444964 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-r69mf"] Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.445882 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.446311 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.447785 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.449671 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4cprr"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.449803 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.450638 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.451960 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.453214 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564042-j5cmc"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.454293 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.455337 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws"] Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.456391 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.458240 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.459985 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xsbrw"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.460893 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.463353 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.464665 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.466132 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467358 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467806 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467834 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-serving-cert\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467856 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgd86\" (UniqueName: \"kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467878 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-auth-proxy-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467898 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-service-ca-bundle\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467915 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: 
\"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467932 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4224v\" (UniqueName: \"kubernetes.io/projected/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-kube-api-access-4224v\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467957 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467978 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-encryption-config\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.467997 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.468013 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.468870 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hk5gs"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469230 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469381 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469420 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbsh8\" (UniqueName: \"kubernetes.io/projected/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-kube-api-access-mbsh8\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469440 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469455 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/04d82f58-0677-4450-baff-d3620aa86b32-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469477 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469494 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-serving-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469515 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469533 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.469551 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469572 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcf6v\" (UniqueName: \"kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469590 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkl5w\" (UniqueName: \"kubernetes.io/projected/096c78f1-127f-4281-81b4-22ff1fd40e04-kube-api-access-pkl5w\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469609 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a03aa3-6810-477f-8f45-79abe51d7d7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469627 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f5a03aa3-6810-477f-8f45-79abe51d7d7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469643 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469662 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-config\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469678 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4300327-af6f-4261-8973-ef640d24993f-audit-dir\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469693 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4slpv\" (UniqueName: \"kubernetes.io/projected/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-kube-api-access-4slpv\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.469710 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469724 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-metrics-tls\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469745 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-audit-policies\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469778 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469793 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096c78f1-127f-4281-81b4-22ff1fd40e04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469812 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgmnm\" (UniqueName: \"kubernetes.io/projected/04d82f58-0677-4450-baff-d3620aa86b32-kube-api-access-bgmnm\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469830 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2x9s\" (UniqueName: \"kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469872 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d82f58-0677-4450-baff-d3620aa86b32-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469895 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-serving-cert\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469920 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469942 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8xbf\" (UniqueName: \"kubernetes.io/projected/0cbc9065-8609-4637-958c-805de5c08411-kube-api-access-v8xbf\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469959 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-config\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469976 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mskc7\" (UniqueName: \"kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.469994 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470017 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470035 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99p5l\" (UniqueName: \"kubernetes.io/projected/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-kube-api-access-99p5l\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470053 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470082 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470105 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jczrb\" (UniqueName: \"kubernetes.io/projected/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-kube-api-access-jczrb\") pod \"console-operator-58897d9998-k6kp8\" (UID: 
\"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470125 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-serving-cert\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470167 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470192 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-image-import-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470241 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-config\") pod 
\"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470284 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-etcd-client\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470316 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470367 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470403 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470435 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-64gmx\" (UniqueName: \"kubernetes.io/projected/e427c5bb-ebf9-4836-8a31-9968569fbe48-kube-api-access-64gmx\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470515 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470549 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470576 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470598 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmp6j\" (UniqueName: \"kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " 
pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470615 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-config\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470643 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-service-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470667 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-encryption-config\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470685 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvw2n\" (UniqueName: \"kubernetes.io/projected/d4300327-af6f-4261-8973-ef640d24993f-kube-api-access-zvw2n\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470707 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470742 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470789 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e427c5bb-ebf9-4836-8a31-9968569fbe48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470810 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzp8\" (UniqueName: \"kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8\") pod \"auto-csr-approver-29564042-j5cmc\" (UID: \"287df787-86a7-4a56-b5a1-fb55b6bed91b\") " pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470838 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470857 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-config\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-client\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470906 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470934 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-trusted-ca\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470938 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470953 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wmbvw\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-kube-api-access-wmbvw\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471274 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471296 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471345 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471427 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" 
Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471481 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-serving-cert\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471519 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471560 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-client\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471584 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471622 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-serving-cert\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471670 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471711 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0cbc9065-8609-4637-958c-805de5c08411-machine-approver-tls\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471780 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471823 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit-dir\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471868 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-serving-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-26stn\" (UniqueName: \"kubernetes.io/projected/85980a8c-19a9-4b94-8d91-a7fdbad22cab-kube-api-access-26stn\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.471721 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.473685 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.470241 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d82f58-0677-4450-baff-d3620aa86b32-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.474616 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475378 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-serving-cert\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475398 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-serving-cert\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475735 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-auth-proxy-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475905 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppvt\" (UniqueName: \"kubernetes.io/projected/ef638f17-5999-467e-b170-8ef20068e451-kube-api-access-fppvt\") pod \"downloads-7954f5f757-gvkpz\" (UID: \"ef638f17-5999-467e-b170-8ef20068e451\") " pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475967 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.476294 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-encryption-config\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.477199 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.475997 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-images\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.486801 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-node-pullsecrets\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.486941 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-r69mf"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.487923 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-available-featuregates\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.487991 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.476479 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.488394 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d4300327-af6f-4261-8973-ef640d24993f-audit-dir\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.489486 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: 
\"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.489628 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.489879 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-config\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.490400 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-image-import-ca\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.490184 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.490457 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-audit-dir\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 
18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.491047 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbc9065-8609-4637-958c-805de5c08411-config\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.491309 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d4300327-af6f-4261-8973-ef640d24993f-audit-policies\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.491372 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-config\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.491404 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-trusted-ca\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.491615 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.491790 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.492116 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-config\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.493205 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-etcd-client\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.493788 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.494659 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " 
pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.494841 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-metrics-tls\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.495381 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.495846 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5a03aa3-6810-477f-8f45-79abe51d7d7e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.496373 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v6brz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.496582 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-etcd-client\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.497201 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-config\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.497361 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.497538 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.497542 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.498024 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04d82f58-0677-4450-baff-d3620aa86b32-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.498342 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0cbc9065-8609-4637-958c-805de5c08411-machine-approver-tls\") pod 
\"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.498810 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f5a03aa3-6810-477f-8f45-79abe51d7d7e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.500164 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.500713 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-config\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.501008 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.501496 4857 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.501839 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-node-pullsecrets\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.501980 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e427c5bb-ebf9-4836-8a31-9968569fbe48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.502942 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.502967 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-serving-cert\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.503355 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d4300327-af6f-4261-8973-ef640d24993f-encryption-config\") pod 
\"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.503394 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-serving-cert\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.504309 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-service-ca\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.505206 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.505382 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.505875 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/096c78f1-127f-4281-81b4-22ff1fd40e04-images\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.507254 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.510873 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.514013 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jkjbk"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.514324 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-etcd-client\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.514325 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.513814 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.515278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.515675 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85980a8c-19a9-4b94-8d91-a7fdbad22cab-serving-cert\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.516907 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.517836 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.518336 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-serving-cert\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.522049 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/096c78f1-127f-4281-81b4-22ff1fd40e04-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.524357 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.524526 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.526383 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.530285 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.529118 4857 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.531970 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hk5gs"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.533524 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-service-ca-bundle\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.533871 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.534444 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.535423 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.536935 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jkjbk"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.538329 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wp82x"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.539667 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-mqx8m"] Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.540563 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.551055 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.569179 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.589720 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.609524 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.629068 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.638635 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.650282 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.675825 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 14:03:40 crc 
kubenswrapper[4857]: I0318 14:03:40.684122 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.689713 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.709166 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.729524 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.736553 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.750059 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.769309 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.778770 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.790223 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.809432 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.830130 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.831568 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.870737 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.890412 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.910352 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.930704 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 
14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.949946 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.970645 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 14:03:40 crc kubenswrapper[4857]: I0318 14:03:40.990206 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.009587 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.037030 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.049493 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.069697 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.089871 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.109374 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.129768 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.150632 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.169888 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.190305 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.209483 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.230253 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.249581 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.269217 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.290652 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.310147 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.330086 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.350100 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.387541 4857 
request.go:700] Waited for 1.00875972s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.389921 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.409786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.429546 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.450350 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.469740 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.489895 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.509222 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.529953 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.549604 4857 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.569560 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.589771 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.609315 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.629448 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.650008 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.669041 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.690039 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.709517 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.728987 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 
14:03:41.750126 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.769905 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.789115 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.809738 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.829393 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.849403 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.870303 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.889315 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.910118 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.929645 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 18 
14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.949228 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.970158 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 14:03:41 crc kubenswrapper[4857]: I0318 14:03:41.989526 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.009060 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.029943 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.049869 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.069791 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.090247 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.109553 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.129444 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.165641 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgd86\" (UniqueName: \"kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86\") pod \"marketplace-operator-79b997595-kndt2\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.192399 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4224v\" (UniqueName: \"kubernetes.io/projected/e4e4af7c-f5d3-4b12-b419-70dbae8cab23-kube-api-access-4224v\") pod \"authentication-operator-69f744f599-4cprr\" (UID: \"e4e4af7c-f5d3-4b12-b419-70dbae8cab23\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.209520 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.211253 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jczrb\" (UniqueName: \"kubernetes.io/projected/2e10ef1d-7c47-45d3-b16d-1ac7adccadbd-kube-api-access-jczrb\") pod \"console-operator-58897d9998-k6kp8\" (UID: \"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd\") " pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.229705 4857 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.250152 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.284085 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mskc7\" (UniqueName: 
\"kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7\") pod \"oauth-openshift-558db77b4-gxtb9\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.304685 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbsh8\" (UniqueName: \"kubernetes.io/projected/95c8f32c-92c3-41f9-b4ca-e7ca90c22845-kube-api-access-mbsh8\") pod \"dns-operator-744455d44c-xsbrw\" (UID: \"95c8f32c-92c3-41f9-b4ca-e7ca90c22845\") " pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.325458 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4slpv\" (UniqueName: \"kubernetes.io/projected/3cc72860-8bb3-4d9b-af72-7f2b1a270d30-kube-api-access-4slpv\") pod \"openshift-config-operator-7777fb866f-m2v2c\" (UID: \"3cc72860-8bb3-4d9b-af72-7f2b1a270d30\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.347424 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26stn\" (UniqueName: \"kubernetes.io/projected/85980a8c-19a9-4b94-8d91-a7fdbad22cab-kube-api-access-26stn\") pod \"etcd-operator-b45778765-fr8cx\" (UID: \"85980a8c-19a9-4b94-8d91-a7fdbad22cab\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.358208 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.366025 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.369682 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99p5l\" (UniqueName: \"kubernetes.io/projected/b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d-kube-api-access-99p5l\") pod \"apiserver-76f77b778f-qr84c\" (UID: \"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d\") " pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.380691 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.385452 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gmx\" (UniqueName: \"kubernetes.io/projected/e427c5bb-ebf9-4836-8a31-9968569fbe48-kube-api-access-64gmx\") pod \"cluster-samples-operator-665b6dd947-52cxv\" (UID: \"e427c5bb-ebf9-4836-8a31-9968569fbe48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.387666 4857 request.go:700] Waited for 1.894180092s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.389079 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.405437 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcf6v\" (UniqueName: \"kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v\") pod \"route-controller-manager-6576b87f9c-rd2fj\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.425695 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkl5w\" (UniqueName: \"kubernetes.io/projected/096c78f1-127f-4281-81b4-22ff1fd40e04-kube-api-access-pkl5w\") pod \"machine-api-operator-5694c8668f-5rwkm\" (UID: \"096c78f1-127f-4281-81b4-22ff1fd40e04\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.428308 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.436220 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.441208 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.447590 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2x9s\" (UniqueName: \"kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s\") pod \"controller-manager-879f6c89f-r5t7m\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.460046 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.471320 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8xbf\" (UniqueName: \"kubernetes.io/projected/0cbc9065-8609-4637-958c-805de5c08411-kube-api-access-v8xbf\") pod \"machine-approver-56656f9798-fkw6z\" (UID: \"0cbc9065-8609-4637-958c-805de5c08411\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.496162 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmp6j\" (UniqueName: \"kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j\") pod \"console-f9d7485db-4bqqp\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.513596 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.522366 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgmnm\" (UniqueName: \"kubernetes.io/projected/04d82f58-0677-4450-baff-d3620aa86b32-kube-api-access-bgmnm\") pod \"openshift-controller-manager-operator-756b6f6bc6-xf995\" (UID: \"04d82f58-0677-4450-baff-d3620aa86b32\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.528561 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.532958 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvw2n\" (UniqueName: \"kubernetes.io/projected/d4300327-af6f-4261-8973-ef640d24993f-kube-api-access-zvw2n\") pod \"apiserver-7bbb656c7d-dnrd6\" (UID: \"d4300327-af6f-4261-8973-ef640d24993f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.586331 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.587660 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.588009 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.667616 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.667916 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.668296 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.670344 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.670546 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.671537 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.673109 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmbvw\" (UniqueName: \"kubernetes.io/projected/f5a03aa3-6810-477f-8f45-79abe51d7d7e-kube-api-access-wmbvw\") pod \"cluster-image-registry-operator-dc59b4c8b-5bssd\" (UID: \"f5a03aa3-6810-477f-8f45-79abe51d7d7e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.673789 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fppvt\" (UniqueName: \"kubernetes.io/projected/ef638f17-5999-467e-b170-8ef20068e451-kube-api-access-fppvt\") pod \"downloads-7954f5f757-gvkpz\" (UID: \"ef638f17-5999-467e-b170-8ef20068e451\") " pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.674469 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzp8\" (UniqueName: \"kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8\") pod \"auto-csr-approver-29564042-j5cmc\" (UID: \"287df787-86a7-4a56-b5a1-fb55b6bed91b\") " pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.694545 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.710219 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.713158 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.719272 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.751593 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k6kp8"] Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765119 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765175 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtrzz\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765209 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765230 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765266 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765283 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765328 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-default-certificate\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765347 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-service-ca-bundle\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765373 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-stats-auth\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765396 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-metrics-certs\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765413 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jx8\" (UniqueName: \"kubernetes.io/projected/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-kube-api-access-g4jx8\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.765443 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: E0318 
14:03:42.765905 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.265891245 +0000 UTC m=+207.395019702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:42 crc kubenswrapper[4857]: W0318 14:03:42.776464 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e10ef1d_7c47_45d3_b16d_1ac7adccadbd.slice/crio-53cca3c1919e561765d21e6f2216deb8a17048dfb13b2b756e1d9f5ffed3f117 WatchSource:0}: Error finding container 53cca3c1919e561765d21e6f2216deb8a17048dfb13b2b756e1d9f5ffed3f117: Status 404 returned error can't find the container with id 53cca3c1919e561765d21e6f2216deb8a17048dfb13b2b756e1d9f5ffed3f117 Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.794625 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.866929 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.867602 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.867952 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-csi-data-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.868216 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4jx8\" (UniqueName: \"kubernetes.io/projected/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-kube-api-access-g4jx8\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.868376 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-plugins-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.868533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea4eb52-a889-4cec-8511-f1ef21cc732f-config\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.868731 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4eb52-a889-4cec-8511-f1ef21cc732f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.869201 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8a3a7a9-a253-480c-b074-485bc5768d8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.869375 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416713e-365f-40d0-b5b5-57e570feaf91-config\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: 
\"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.869557 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l2nh\" (UniqueName: \"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-kube-api-access-6l2nh\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.869719 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-registration-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.869881 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78d6q\" (UniqueName: \"kubernetes.io/projected/f7e89fbf-ede1-47f3-84dc-54b8471fa052-kube-api-access-78d6q\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.870149 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.870351 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-node-bootstrap-token\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.870515 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzk6\" (UniqueName: \"kubernetes.io/projected/a977ae9e-847e-402e-ba1f-b716811ee998-kube-api-access-2vzk6\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.870676 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jwb\" (UniqueName: \"kubernetes.io/projected/0978ab58-ab3c-4265-8674-c2572b9b47b6-kube-api-access-k7jwb\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.871168 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9jg\" (UniqueName: \"kubernetes.io/projected/95af33b4-74c3-4fbf-8286-bc021087c17c-kube-api-access-pt9jg\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.871375 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtrzz\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz\") pod 
\"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.871566 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.871728 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a977ae9e-847e-402e-ba1f-b716811ee998-tmpfs\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.871911 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.872236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-srv-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.872420 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a3a7a9-a253-480c-b074-485bc5768d8c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.872628 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-profile-collector-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.872937 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-certs\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.873158 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/308e1e78-75ec-431a-82a9-09437cccd9c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.873539 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/308e1e78-75ec-431a-82a9-09437cccd9c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.873704 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv2wm\" (UniqueName: \"kubernetes.io/projected/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-kube-api-access-xv2wm\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.874014 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.874201 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea4eb52-a889-4cec-8511-f1ef21cc732f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.874515 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-default-certificate\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.874683 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-service-ca-bundle\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.874844 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdsdd\" (UniqueName: \"kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.875009 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqv69\" (UniqueName: \"kubernetes.io/projected/5416713e-365f-40d0-b5b5-57e570feaf91-kube-api-access-sqv69\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.875177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0978ab58-ab3c-4265-8674-c2572b9b47b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.875411 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-metrics-tls\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:42 crc 
kubenswrapper[4857]: I0318 14:03:42.875570 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-key\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.875714 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-stats-auth\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.875889 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-images\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.876122 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.876303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.876478 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fc5\" (UniqueName: \"kubernetes.io/projected/2010b90c-be36-487d-8050-071bac0d5600-kube-api-access-92fc5\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.876659 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-metrics-certs\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.876818 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0978ab58-ab3c-4265-8674-c2572b9b47b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877022 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aa2da99-50b5-4d4f-aa55-b4507cd134be-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: 
I0318 14:03:42.877206 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwqts\" (UniqueName: \"kubernetes.io/projected/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-kube-api-access-nwqts\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877339 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-mountpoint-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877490 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2dfd5f25-d490-4570-86ed-bf436c585658-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-config\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" 
(UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.877946 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wp24\" (UniqueName: \"kubernetes.io/projected/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-kube-api-access-6wp24\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878068 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-socket-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878173 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-srv-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878293 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878376 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8jpg\" (UniqueName: \"kubernetes.io/projected/5059db9d-7a66-401f-939c-e94b2bd2eff9-kube-api-access-r8jpg\") pod \"migrator-59844c95c7-rlmdz\" (UID: \"5059db9d-7a66-401f-939c-e94b2bd2eff9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878490 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95af33b4-74c3-4fbf-8286-bc021087c17c-cert\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878612 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-apiservice-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878772 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.878962 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5416713e-365f-40d0-b5b5-57e570feaf91-serving-cert\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879069 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfq5\" (UniqueName: \"kubernetes.io/projected/6386751b-4de0-4258-aa09-d4cf545db8b1-kube-api-access-mdfq5\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879178 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879300 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqx9\" (UniqueName: \"kubernetes.io/projected/3387b870-2054-4e0f-97b6-4af4f37bf34d-kube-api-access-gmqx9\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879482 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879603 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9b7\" (UniqueName: \"kubernetes.io/projected/2dfd5f25-d490-4570-86ed-bf436c585658-kube-api-access-vf9b7\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879790 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2010b90c-be36-487d-8050-071bac0d5600-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.879947 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-cabundle\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880066 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0233-e26e-477a-adb9-6b281555b255-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880501 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-config-volume\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880618 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6386751b-4de0-4258-aa09-d4cf545db8b1-proxy-tls\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880695 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880837 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.880990 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cn7\" (UniqueName: \"kubernetes.io/projected/d2aa0233-e26e-477a-adb9-6b281555b255-kube-api-access-m4cn7\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 
14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881160 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkqzs\" (UniqueName: \"kubernetes.io/projected/189dc2a2-def0-41c0-9a6d-044db219385c-kube-api-access-qkqzs\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881353 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881487 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj29r\" (UniqueName: \"kubernetes.io/projected/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-kube-api-access-wj29r\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881634 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48q6v\" (UniqueName: \"kubernetes.io/projected/7aa2da99-50b5-4d4f-aa55-b4507cd134be-kube-api-access-48q6v\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881800 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/7aa2da99-50b5-4d4f-aa55-b4507cd134be-proxy-tls\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.881988 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a3a7a9-a253-480c-b074-485bc5768d8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.882113 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-webhook-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: E0318 14:03:42.882367 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.382341134 +0000 UTC m=+207.511469591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.897155 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-metrics-certs\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.909294 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.911313 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.918947 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-stats-auth\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.921115 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.927558 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.927606 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.928483 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-service-ca-bundle\") pod 
\"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.952189 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.971670 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-default-certificate\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.973466 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.989879 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4jx8\" (UniqueName: \"kubernetes.io/projected/188cb24d-b3cf-46dd-8a07-12afe6ea75e0-kube-api-access-g4jx8\") pod \"router-default-5444994796-xwln7\" (UID: \"188cb24d-b3cf-46dd-8a07-12afe6ea75e0\") " pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.992139 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-images\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc 
kubenswrapper[4857]: I0318 14:03:42.993714 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92fc5\" (UniqueName: \"kubernetes.io/projected/2010b90c-be36-487d-8050-071bac0d5600-kube-api-access-92fc5\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.996070 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.996146 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.996191 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0978ab58-ab3c-4265-8674-c2572b9b47b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.996225 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7aa2da99-50b5-4d4f-aa55-b4507cd134be-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998139 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0978ab58-ab3c-4265-8674-c2572b9b47b6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998492 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-images\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998595 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwqts\" (UniqueName: \"kubernetes.io/projected/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-kube-api-access-nwqts\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998616 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-mountpoint-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998635 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2dfd5f25-d490-4570-86ed-bf436c585658-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-config\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998680 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wp24\" (UniqueName: \"kubernetes.io/projected/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-kube-api-access-6wp24\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998703 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-socket-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998718 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-srv-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: 
\"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998738 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8jpg\" (UniqueName: \"kubernetes.io/projected/5059db9d-7a66-401f-939c-e94b2bd2eff9-kube-api-access-r8jpg\") pod \"migrator-59844c95c7-rlmdz\" (UID: \"5059db9d-7a66-401f-939c-e94b2bd2eff9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998774 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998793 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95af33b4-74c3-4fbf-8286-bc021087c17c-cert\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998811 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-apiservice-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998829 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5416713e-365f-40d0-b5b5-57e570feaf91-serving-cert\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998849 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdfq5\" (UniqueName: \"kubernetes.io/projected/6386751b-4de0-4258-aa09-d4cf545db8b1-kube-api-access-mdfq5\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998877 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998896 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqx9\" (UniqueName: \"kubernetes.io/projected/3387b870-2054-4e0f-97b6-4af4f37bf34d-kube-api-access-gmqx9\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998927 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf9b7\" (UniqueName: \"kubernetes.io/projected/2dfd5f25-d490-4570-86ed-bf436c585658-kube-api-access-vf9b7\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998943 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-cabundle\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998969 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2010b90c-be36-487d-8050-071bac0d5600-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.998994 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0233-e26e-477a-adb9-6b281555b255-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999010 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-config-volume\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999029 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999043 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cn7\" (UniqueName: \"kubernetes.io/projected/d2aa0233-e26e-477a-adb9-6b281555b255-kube-api-access-m4cn7\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999059 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6386751b-4de0-4258-aa09-d4cf545db8b1-proxy-tls\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999074 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999091 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkqzs\" (UniqueName: \"kubernetes.io/projected/189dc2a2-def0-41c0-9a6d-044db219385c-kube-api-access-qkqzs\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 
14:03:42.999108 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj29r\" (UniqueName: \"kubernetes.io/projected/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-kube-api-access-wj29r\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999123 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48q6v\" (UniqueName: \"kubernetes.io/projected/7aa2da99-50b5-4d4f-aa55-b4507cd134be-kube-api-access-48q6v\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aa2da99-50b5-4d4f-aa55-b4507cd134be-proxy-tls\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999167 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-webhook-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999184 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a3a7a9-a253-480c-b074-485bc5768d8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: 
\"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999200 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999220 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-csi-data-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999236 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea4eb52-a889-4cec-8511-f1ef21cc732f-config\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999253 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-plugins-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999267 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0ea4eb52-a889-4cec-8511-f1ef21cc732f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999286 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8a3a7a9-a253-480c-b074-485bc5768d8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999310 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416713e-365f-40d0-b5b5-57e570feaf91-config\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999325 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l2nh\" (UniqueName: \"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-kube-api-access-6l2nh\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999339 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-registration-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:42 
crc kubenswrapper[4857]: I0318 14:03:42.999358 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78d6q\" (UniqueName: \"kubernetes.io/projected/f7e89fbf-ede1-47f3-84dc-54b8471fa052-kube-api-access-78d6q\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999377 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999392 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jwb\" (UniqueName: \"kubernetes.io/projected/0978ab58-ab3c-4265-8674-c2572b9b47b6-kube-api-access-k7jwb\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999408 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-node-bootstrap-token\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999423 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzk6\" (UniqueName: \"kubernetes.io/projected/a977ae9e-847e-402e-ba1f-b716811ee998-kube-api-access-2vzk6\") pod 
\"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999437 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt9jg\" (UniqueName: \"kubernetes.io/projected/95af33b4-74c3-4fbf-8286-bc021087c17c-kube-api-access-pt9jg\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999473 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a977ae9e-847e-402e-ba1f-b716811ee998-tmpfs\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999490 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999504 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-srv-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999519 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f8a3a7a9-a253-480c-b074-485bc5768d8c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999538 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-profile-collector-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999560 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-certs\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999576 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/308e1e78-75ec-431a-82a9-09437cccd9c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999600 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:42 crc 
kubenswrapper[4857]: I0318 14:03:42.999614 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/308e1e78-75ec-431a-82a9-09437cccd9c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999631 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv2wm\" (UniqueName: \"kubernetes.io/projected/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-kube-api-access-xv2wm\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999653 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea4eb52-a889-4cec-8511-f1ef21cc732f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdsdd\" (UniqueName: \"kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999691 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqv69\" (UniqueName: \"kubernetes.io/projected/5416713e-365f-40d0-b5b5-57e570feaf91-kube-api-access-sqv69\") pod 
\"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999706 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0978ab58-ab3c-4265-8674-c2572b9b47b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:42 crc kubenswrapper[4857]: I0318 14:03:42.999725 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-key\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:42.999741 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-metrics-tls\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.004457 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-mountpoint-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.004700 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7aa2da99-50b5-4d4f-aa55-b4507cd134be-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.005077 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6386751b-4de0-4258-aa09-d4cf545db8b1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.135687 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-csi-data-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.136839 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-config\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.154566 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2aa0233-e26e-477a-adb9-6b281555b255-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.156984 4857 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-config-volume\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.168595 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.169841 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a977ae9e-847e-402e-ba1f-b716811ee998-tmpfs\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.170090 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.670056831 +0000 UTC m=+207.799185288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.170849 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea4eb52-a889-4cec-8511-f1ef21cc732f-config\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.171137 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-plugins-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.171256 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.172265 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-apiservice-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: 
\"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.172362 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5416713e-365f-40d0-b5b5-57e570feaf91-config\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.172649 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-registration-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.173193 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189dc2a2-def0-41c0-9a6d-044db219385c-socket-dir\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.174702 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.176103 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-cabundle\") pod 
\"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.177822 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a3a7a9-a253-480c-b074-485bc5768d8c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.188086 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-srv-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.188803 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6386751b-4de0-4258-aa09-d4cf545db8b1-proxy-tls\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.189395 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.192354 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5416713e-365f-40d0-b5b5-57e570feaf91-serving-cert\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.193780 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a977ae9e-847e-402e-ba1f-b716811ee998-webhook-cert\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.198986 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8a3a7a9-a253-480c-b074-485bc5768d8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.221586 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtrzz\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.222111 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/308e1e78-75ec-431a-82a9-09437cccd9c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.222574 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-profile-collector-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.233217 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/308e1e78-75ec-431a-82a9-09437cccd9c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.237038 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l2nh\" (UniqueName: \"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-kube-api-access-6l2nh\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.239627 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92fc5\" (UniqueName: \"kubernetes.io/projected/2010b90c-be36-487d-8050-071bac0d5600-kube-api-access-92fc5\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.240676 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 
14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.241156 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.741136358 +0000 UTC m=+207.870264815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.241289 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.242858 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.742848114 +0000 UTC m=+207.871976571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.243814 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7aa2da99-50b5-4d4f-aa55-b4507cd134be-proxy-tls\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.243946 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48q6v\" (UniqueName: \"kubernetes.io/projected/7aa2da99-50b5-4d4f-aa55-b4507cd134be-kube-api-access-48q6v\") pod \"machine-config-controller-84d6567774-hl9jv\" (UID: \"7aa2da99-50b5-4d4f-aa55-b4507cd134be\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.248022 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2010b90c-be36-487d-8050-071bac0d5600-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wp82x\" (UID: \"2010b90c-be36-487d-8050-071bac0d5600\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.248324 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-node-bootstrap-token\") pod 
\"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.248886 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3387b870-2054-4e0f-97b6-4af4f37bf34d-srv-cert\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.249147 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.249278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-metrics-tls\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.249361 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.252704 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj29r\" (UniqueName: 
\"kubernetes.io/projected/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-kube-api-access-wj29r\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.253125 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2dfd5f25-d490-4570-86ed-bf436c585658-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.254428 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f7e89fbf-ede1-47f3-84dc-54b8471fa052-certs\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.309085 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwqts\" (UniqueName: \"kubernetes.io/projected/3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac-kube-api-access-nwqts\") pod \"dns-default-jkjbk\" (UID: \"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac\") " pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.309335 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.331148 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wp24\" (UniqueName: \"kubernetes.io/projected/ec0b6656-7dff-430f-b121-5bbbc7bc8fc9-kube-api-access-6wp24\") pod \"kube-storage-version-migrator-operator-b67b599dd-qrcrr\" (UID: \"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.345259 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea4eb52-a889-4cec-8511-f1ef21cc732f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.346020 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78d6q\" (UniqueName: \"kubernetes.io/projected/f7e89fbf-ede1-47f3-84dc-54b8471fa052-kube-api-access-78d6q\") pod \"machine-config-server-mqx8m\" (UID: \"f7e89fbf-ede1-47f3-84dc-54b8471fa052\") " pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.346467 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.347106 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:43.847078208 +0000 UTC m=+207.976206665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.352607 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8jpg\" (UniqueName: \"kubernetes.io/projected/5059db9d-7a66-401f-939c-e94b2bd2eff9-kube-api-access-r8jpg\") pod \"migrator-59844c95c7-rlmdz\" (UID: \"5059db9d-7a66-401f-939c-e94b2bd2eff9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.353618 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.354936 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ea4eb52-a889-4cec-8511-f1ef21cc732f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qdffj\" (UID: \"0ea4eb52-a889-4cec-8511-f1ef21cc732f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.355524 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.359076 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqx9\" (UniqueName: \"kubernetes.io/projected/3387b870-2054-4e0f-97b6-4af4f37bf34d-kube-api-access-gmqx9\") pod \"catalog-operator-68c6474976-frk6c\" (UID: \"3387b870-2054-4e0f-97b6-4af4f37bf34d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.360980 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdsdd\" (UniqueName: \"kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd\") pod \"collect-profiles-29564040-b8w4t\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.361097 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cn7\" (UniqueName: \"kubernetes.io/projected/d2aa0233-e26e-477a-adb9-6b281555b255-kube-api-access-m4cn7\") pod \"package-server-manager-789f6589d5-gh9dk\" (UID: \"d2aa0233-e26e-477a-adb9-6b281555b255\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.361452 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkqzs\" (UniqueName: \"kubernetes.io/projected/189dc2a2-def0-41c0-9a6d-044db219385c-kube-api-access-qkqzs\") pod \"csi-hostpathplugin-hk5gs\" (UID: \"189dc2a2-def0-41c0-9a6d-044db219385c\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.364877 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.368403 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdfq5\" (UniqueName: \"kubernetes.io/projected/6386751b-4de0-4258-aa09-d4cf545db8b1-kube-api-access-mdfq5\") pod \"machine-config-operator-74547568cd-prb9h\" (UID: \"6386751b-4de0-4258-aa09-d4cf545db8b1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.369403 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0978ab58-ab3c-4265-8674-c2572b9b47b6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.370576 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jwb\" (UniqueName: \"kubernetes.io/projected/0978ab58-ab3c-4265-8674-c2572b9b47b6-kube-api-access-k7jwb\") pod \"openshift-apiserver-operator-796bbdcf4f-mbv77\" (UID: \"0978ab58-ab3c-4265-8674-c2572b9b47b6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.371085 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf9b7\" (UniqueName: \"kubernetes.io/projected/2dfd5f25-d490-4570-86ed-bf436c585658-kube-api-access-vf9b7\") pod \"control-plane-machine-set-operator-78cbb6b69f-7fk6f\" (UID: \"2dfd5f25-d490-4570-86ed-bf436c585658\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.371086 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/308e1e78-75ec-431a-82a9-09437cccd9c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-r5wln\" (UID: \"308e1e78-75ec-431a-82a9-09437cccd9c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.371515 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt9jg\" (UniqueName: \"kubernetes.io/projected/95af33b4-74c3-4fbf-8286-bc021087c17c-kube-api-access-pt9jg\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.371970 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" event={"ID":"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd","Type":"ContainerStarted","Data":"53cca3c1919e561765d21e6f2216deb8a17048dfb13b2b756e1d9f5ffed3f117"} Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.372669 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" event={"ID":"0cbc9065-8609-4637-958c-805de5c08411","Type":"ContainerStarted","Data":"f7b0c3db5c3c75f0d426103ffee25a1c28f834d1a12345ded0e7b63483a591d3"} Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.372352 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mswzz\" (UID: \"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 
18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.372505 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8a3a7a9-a253-480c-b074-485bc5768d8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-drmwr\" (UID: \"f8a3a7a9-a253-480c-b074-485bc5768d8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.372406 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95af33b4-74c3-4fbf-8286-bc021087c17c-cert\") pod \"ingress-canary-r69mf\" (UID: \"95af33b4-74c3-4fbf-8286-bc021087c17c\") " pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.390627 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25-signing-key\") pod \"service-ca-9c57cc56f-v6brz\" (UID: \"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25\") " pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.390643 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.390726 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.394289 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.402992 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.407510 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzk6\" (UniqueName: \"kubernetes.io/projected/a977ae9e-847e-402e-ba1f-b716811ee998-kube-api-access-2vzk6\") pod \"packageserver-d55dfcdfc-298nc\" (UID: \"a977ae9e-847e-402e-ba1f-b716811ee998\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.407706 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv2wm\" (UniqueName: \"kubernetes.io/projected/8ad51d9d-dcd1-467e-9aa6-162d19c035ed-kube-api-access-xv2wm\") pod \"olm-operator-6b444d44fb-85tjg\" (UID: \"8ad51d9d-dcd1-467e-9aa6-162d19c035ed\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.411903 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.421852 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4cprr"] Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.421901 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.426348 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.433933 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.665581 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mqx8m" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.665639 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.665670 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.666143 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.666562 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jkjbk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.666693 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.667189 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.667234 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.668250 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqv69\" (UniqueName: \"kubernetes.io/projected/5416713e-365f-40d0-b5b5-57e570feaf91-kube-api-access-sqv69\") pod \"service-ca-operator-777779d784-kx9ws\" (UID: \"5416713e-365f-40d0-b5b5-57e570feaf91\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.668431 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r69mf" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.666578 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.671941 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.672384 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.172369145 +0000 UTC m=+208.301497602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.672558 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.688824 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"] Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.689547 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"] Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.776423 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.777045 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.2770218 +0000 UTC m=+208.406150257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.788141 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5rwkm"] Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.820234 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"] Mar 18 14:03:43 crc kubenswrapper[4857]: W0318 14:03:43.850002 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod188cb24d_b3cf_46dd_8a07_12afe6ea75e0.slice/crio-0def8c8d206b5e0b6fb6d25bd1a6a2e4e1127e67464d186b0a1489457bb6ad6b WatchSource:0}: Error finding container 0def8c8d206b5e0b6fb6d25bd1a6a2e4e1127e67464d186b0a1489457bb6ad6b: Status 404 returned error can't find the container with id 0def8c8d206b5e0b6fb6d25bd1a6a2e4e1127e67464d186b0a1489457bb6ad6b Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.878793 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.879372 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.379355272 +0000 UTC m=+208.508483739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: W0318 14:03:43.927716 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2c5cd45_6030_4ba1_96fc_ffc82b00af1e.slice/crio-6e05b8b52ad281994689c98753614bf0713030eb164e71b9cf271678a90d4206 WatchSource:0}: Error finding container 6e05b8b52ad281994689c98753614bf0713030eb164e71b9cf271678a90d4206: Status 404 returned error can't find the container with id 6e05b8b52ad281994689c98753614bf0713030eb164e71b9cf271678a90d4206 Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.977998 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.979703 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.979881 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.479845204 +0000 UTC m=+208.608973661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:43 crc kubenswrapper[4857]: I0318 14:03:43.980043 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:43 crc kubenswrapper[4857]: E0318 14:03:43.980380 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.480368058 +0000 UTC m=+208.609496515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.080991 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.081432 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.581411755 +0000 UTC m=+208.710540212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.082722 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv"] Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.087500 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"] Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.087560 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fr8cx"] Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.181889 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.182550 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.682535043 +0000 UTC m=+208.811663500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.282969 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.283644 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.783613561 +0000 UTC m=+208.912742018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.368879 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" event={"ID":"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd","Type":"ContainerStarted","Data":"7cf4263fc09db517fa7fbc5e6ab371239d02542068443b3c0f92cca335fc1134"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.370526 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.371891 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mqx8m" event={"ID":"f7e89fbf-ede1-47f3-84dc-54b8471fa052","Type":"ContainerStarted","Data":"56d5767f6b6347155427565031263e7f9ad9f1e50737a622c70afb40e904a0d9"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.373938 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" event={"ID":"096c78f1-127f-4281-81b4-22ff1fd40e04","Type":"ContainerStarted","Data":"ea9a1bde092377ecae9238d9d3cd4945761765f1a09d64777089c9ffc53ce228"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.375150 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" 
event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerStarted","Data":"8525f8cb35a66960d78eb3624fc34d8565a8a7f67c2aa6264f65f4772b91dc7f"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.378653 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwln7" event={"ID":"188cb24d-b3cf-46dd-8a07-12afe6ea75e0","Type":"ContainerStarted","Data":"0def8c8d206b5e0b6fb6d25bd1a6a2e4e1127e67464d186b0a1489457bb6ad6b"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.385666 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.387197 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" event={"ID":"867f36a7-afd9-4d67-a7d3-42f2ca67ac91","Type":"ContainerStarted","Data":"f84cbf52583b5e9127589e656ae3cad2952a54fc1dff25724f00e00a6edecaeb"} Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.387632 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:44.887613208 +0000 UTC m=+209.016741665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.391318 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" event={"ID":"e4e4af7c-f5d3-4b12-b419-70dbae8cab23","Type":"ContainerStarted","Data":"1a0468918ef69baa38ff83f2c5031f7fc33d499a390fee557f74a0eda0d0af39"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.392558 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" event={"ID":"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e","Type":"ContainerStarted","Data":"6e05b8b52ad281994689c98753614bf0713030eb164e71b9cf271678a90d4206"} Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.439837 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.439916 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.648447 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.649721 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.149698034 +0000 UTC m=+209.278826481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:44 crc kubenswrapper[4857]: I0318 14:03:44.848127 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:44 crc kubenswrapper[4857]: E0318 14:03:44.848581 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.348567459 +0000 UTC m=+209.477695916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.089782 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.090356 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.590323079 +0000 UTC m=+209.719451536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.128029 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podStartSLOduration=160.128009931 podStartE2EDuration="2m40.128009931s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:45.126315494 +0000 UTC m=+209.255443951" watchObservedRunningTime="2026-03-18 14:03:45.128009931 +0000 UTC m=+209.257138388" Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.191702 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.192213 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.692186588 +0000 UTC m=+209.821315045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.293262 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.293636 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.793613435 +0000 UTC m=+209.922741892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.395279 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.395922 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.895860823 +0000 UTC m=+210.024989340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.489322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" event={"ID":"85980a8c-19a9-4b94-8d91-a7fdbad22cab","Type":"ContainerStarted","Data":"41e57326855482c27d9179d5120e97b9392bec600bde6166abd5a240d1727e0e"} Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.496214 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.496487 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:45.996470408 +0000 UTC m=+210.125598865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.623731 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.624474 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:46.124449962 +0000 UTC m=+210.253578419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.734839 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.735458 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:46.235425521 +0000 UTC m=+210.364553978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.749245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" event={"ID":"e8c4acb6-a177-4139-ba23-512a709d4033","Type":"ContainerStarted","Data":"11cff5bd51649ed8ca9e598a2383787a4e34e10bce36e1a7112c2c47a2c89d8d"} Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.754086 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" event={"ID":"0cbc9065-8609-4637-958c-805de5c08411","Type":"ContainerStarted","Data":"0d39324f32bfb5b28082e4fba621435b087d86e672ecd04a1cabcdcd87162353"} Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.757303 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.757367 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 18 14:03:45 crc kubenswrapper[4857]: I0318 14:03:45.841840 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:45 crc kubenswrapper[4857]: E0318 14:03:45.842332 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:46.342310317 +0000 UTC m=+210.471438824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.062779 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.063528 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:46.563507814 +0000 UTC m=+210.692636271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.320686 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.321158 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:46.821143358 +0000 UTC m=+210.950271815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.512081 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.512639 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.012617631 +0000 UTC m=+211.141746098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.615498 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.616124 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.116091074 +0000 UTC m=+211.245219711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.716276 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.716638 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.216622006 +0000 UTC m=+211.345750463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.797710 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" event={"ID":"e4e4af7c-f5d3-4b12-b419-70dbae8cab23","Type":"ContainerStarted","Data":"6e8e747879c1f7edeefab0b852d7eecc80f7f85fd951ba4cb56a6f5e360a9588"} Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.810290 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" event={"ID":"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e","Type":"ContainerStarted","Data":"b6a745640825244382102719f62339e633eb094ae46221f41cd6ca61a83ede65"} Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.812162 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.820392 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.825411 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.325388444 +0000 UTC m=+211.454516901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.830891 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwln7" event={"ID":"188cb24d-b3cf-46dd-8a07-12afe6ea75e0","Type":"ContainerStarted","Data":"9e1d771ac94691530ef3bb4ca8c937f2d9df0afbf7d4d30ec5b3a738cd2890a9"} Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.834354 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.834446 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.838674 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" 
podStartSLOduration=161.838635437 podStartE2EDuration="2m41.838635437s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:46.8288989 +0000 UTC m=+210.958027377" watchObservedRunningTime="2026-03-18 14:03:46.838635437 +0000 UTC m=+210.967763894" Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.922134 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.922490 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.422443721 +0000 UTC m=+211.551572178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.928672 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:46 crc kubenswrapper[4857]: E0318 14:03:46.929265 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.429247168 +0000 UTC m=+211.558375625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.948825 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kndt2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.948895 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Mar 18 14:03:46 crc kubenswrapper[4857]: I0318 14:03:46.949462 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" podStartSLOduration=160.949447031 podStartE2EDuration="2m40.949447031s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:46.948431053 +0000 UTC m=+211.077559510" watchObservedRunningTime="2026-03-18 14:03:46.949447031 +0000 UTC m=+211.078575498" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.161047 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-mqx8m" podStartSLOduration=7.161018644 podStartE2EDuration="7.161018644s" podCreationTimestamp="2026-03-18 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:46.971527155 +0000 UTC m=+211.100655622" watchObservedRunningTime="2026-03-18 14:03:47.161018644 +0000 UTC m=+211.290147101" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.161623 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xwln7" podStartSLOduration=161.16161744 podStartE2EDuration="2m41.16161744s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:47.156719496 +0000 UTC m=+211.285847953" watchObservedRunningTime="2026-03-18 14:03:47.16161744 +0000 UTC m=+211.290745897" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.187197 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.187817 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.687790077 +0000 UTC m=+211.816918544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.394336 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.395596 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.396213 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.896190412 +0000 UTC m=+212.025318869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.496589 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.497082 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:47.997061874 +0000 UTC m=+212.126190331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.598679 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.599197 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.09917449 +0000 UTC m=+212.228302947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.699723 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.700112 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.200094393 +0000 UTC m=+212.329222850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.712807 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.712869 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.801664 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.802180 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.302160928 +0000 UTC m=+212.431289385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.838055 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" event={"ID":"85980a8c-19a9-4b94-8d91-a7fdbad22cab","Type":"ContainerStarted","Data":"92b85c138b7cc7b77baa8714d11e2bafabcd582161fb514ebedd69ad823d4b9c"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.846330 4857 generic.go:334] "Generic (PLEG): container finished" podID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerID="2924ac95bc2ce8de4b6afbd25ce8c73110982a3556e7d4656837edb08fa16d86" exitCode=0 Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.846501 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerDied","Data":"2924ac95bc2ce8de4b6afbd25ce8c73110982a3556e7d4656837edb08fa16d86"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.856645 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" event={"ID":"e8c4acb6-a177-4139-ba23-512a709d4033","Type":"ContainerStarted","Data":"ce51fcdf9cf0548a945e87b60767dc31f46ee0550c7f0be11cbfea3f3d39f720"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.858033 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.861934 4857 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" event={"ID":"096c78f1-127f-4281-81b4-22ff1fd40e04","Type":"ContainerStarted","Data":"0af00e89796f2602fe2db2debc24cf2d855689ee24320d079c1d436cd7689720"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.869565 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fr8cx" podStartSLOduration=161.869533952 podStartE2EDuration="2m41.869533952s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:47.869299236 +0000 UTC m=+211.998427693" watchObservedRunningTime="2026-03-18 14:03:47.869533952 +0000 UTC m=+211.998662409" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.872866 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" event={"ID":"0cbc9065-8609-4637-958c-805de5c08411","Type":"ContainerStarted","Data":"520b7b49dd8f2a77e83bb1edb740ecad24491b95e699f4216fe95c2046afa5bf"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.905138 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.905426 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" event={"ID":"867f36a7-afd9-4d67-a7d3-42f2ca67ac91","Type":"ContainerStarted","Data":"d7a9cb313bec31ca1e4b82f6e74b4c77c481d56c872b55b051425f2e6186ecdb"} Mar 18 14:03:47 crc kubenswrapper[4857]: E0318 14:03:47.906217 
4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.406179876 +0000 UTC m=+212.535308363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.908135 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.927700 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" podStartSLOduration=162.927659144 podStartE2EDuration="2m42.927659144s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:47.911381638 +0000 UTC m=+212.040510095" watchObservedRunningTime="2026-03-18 14:03:47.927659144 +0000 UTC m=+212.056787601" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.929007 4857 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gxtb9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.929086 
4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.933450 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" event={"ID":"e427c5bb-ebf9-4836-8a31-9968569fbe48","Type":"ContainerStarted","Data":"e1970e6c30c307dc30214c8cc07fbf5fb4b65914501a7f4eb6ebfdd09edb2bb9"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.950190 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mqx8m" event={"ID":"f7e89fbf-ede1-47f3-84dc-54b8471fa052","Type":"ContainerStarted","Data":"0957052e49200f9bf1c79a635295a25d15286061647fd928213736d0638539a0"} Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.950904 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kndt2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.950968 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.959016 4857 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rd2fj container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Mar 18 14:03:47 crc kubenswrapper[4857]: I0318 14:03:47.959107 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.120125 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.122824 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.622799037 +0000 UTC m=+212.751927494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.142363 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fkw6z" podStartSLOduration=163.142345302 podStartE2EDuration="2m43.142345302s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:47.980762998 +0000 UTC m=+212.109891455" watchObservedRunningTime="2026-03-18 14:03:48.142345302 +0000 UTC m=+212.271473759" Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.143449 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" podStartSLOduration=162.143443682 podStartE2EDuration="2m42.143443682s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:48.142736963 +0000 UTC m=+212.271865440" watchObservedRunningTime="2026-03-18 14:03:48.143443682 +0000 UTC m=+212.272572139" Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.221329 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.227923 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.727881154 +0000 UTC m=+212.857009611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.228226 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.230365 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.730345061 +0000 UTC m=+212.859473518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.329553 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.330240 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:48.830213566 +0000 UTC m=+212.959342023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.416222 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:48 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:48 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:48 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.416308 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.431551 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.432056 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-18 14:03:48.932040934 +0000 UTC m=+213.061169401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.434656 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.448907 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564042-j5cmc"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.456444 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xsbrw"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.463710 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr84c"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.479236 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995"] Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.481226 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod287df787_86a7_4a56_b5a1_fb55b6bed91b.slice/crio-da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03 WatchSource:0}: Error finding container da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03: Status 404 returned error can't find the container with id 
da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03 Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.492533 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.494499 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.495024 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"] Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.500085 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04d82f58_0677_4450_baff_d3620aa86b32.slice/crio-4f2783d4a2f765938ad781c03fadfebffaf3b8425c23005cfc866525ce1a4b32 WatchSource:0}: Error finding container 4f2783d4a2f765938ad781c03fadfebffaf3b8425c23005cfc866525ce1a4b32: Status 404 returned error can't find the container with id 4f2783d4a2f765938ad781c03fadfebffaf3b8425c23005cfc866525ce1a4b32 Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.501819 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ee9206_490f_4303_9ee7_198148cb3227.slice/crio-1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9 WatchSource:0}: Error finding container 1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9: Status 404 returned error can't find the container with id 1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9 Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.507518 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4300327_af6f_4261_8973_ef640d24993f.slice/crio-8d6c0387fd8c0631c23c544f085687a216ca843b3d0769c5d28cf122f9345c2b WatchSource:0}: Error finding 
container 8d6c0387fd8c0631c23c544f085687a216ca843b3d0769c5d28cf122f9345c2b: Status 404 returned error can't find the container with id 8d6c0387fd8c0631c23c544f085687a216ca843b3d0769c5d28cf122f9345c2b Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.533332 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.533793 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.033776059 +0000 UTC m=+213.162904516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.636101 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.636593 4857 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.136575734 +0000 UTC m=+213.265704191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.682106 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hk5gs"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.687383 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.690796 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gvkpz"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.693137 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.713454 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.717452 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.720373 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f"] Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.720685 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd067c327_e7cb_4fbc_a54f_4ac7bd9c7825.slice/crio-4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad WatchSource:0}: Error finding container 4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad: Status 404 returned error can't find the container with id 4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.732057 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.736881 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.737124 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.237101616 +0000 UTC m=+213.366230063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.738625 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.741407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.741924 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.241905608 +0000 UTC m=+213.371034065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.742020 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln"] Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.752858 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5059db9d_7a66_401f_939c_e94b2bd2eff9.slice/crio-f46b96b16b9cde36ba0df938af745f8ccabc8cab1ddda10baf34f47fcb5141a7 WatchSource:0}: Error finding container f46b96b16b9cde36ba0df938af745f8ccabc8cab1ddda10baf34f47fcb5141a7: Status 404 returned error can't find the container with id f46b96b16b9cde36ba0df938af745f8ccabc8cab1ddda10baf34f47fcb5141a7 Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.758019 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.768390 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr"] Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.768552 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189dc2a2_def0_41c0_9a6d_044db219385c.slice/crio-12952635e0fc47a1790cdda6021864b9df218c1bddec84f4d2e4916717cc8b64 WatchSource:0}: Error finding container 
12952635e0fc47a1790cdda6021864b9df218c1bddec84f4d2e4916717cc8b64: Status 404 returned error can't find the container with id 12952635e0fc47a1790cdda6021864b9df218c1bddec84f4d2e4916717cc8b64 Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.835250 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.843445 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.844025 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.344003073 +0000 UTC m=+213.473131530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.846051 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5a03aa3_6810_477f_8f45_79abe51d7d7e.slice/crio-8727af4fbdb5b366c2c98c1b5e94ba5cb0d061a049585c3b6b0d2dd030633267 WatchSource:0}: Error finding container 8727af4fbdb5b366c2c98c1b5e94ba5cb0d061a049585c3b6b0d2dd030633267: Status 404 returned error can't find the container with id 8727af4fbdb5b366c2c98c1b5e94ba5cb0d061a049585c3b6b0d2dd030633267 Mar 18 14:03:48 crc kubenswrapper[4857]: W0318 14:03:48.858240 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3387b870_2054_4e0f_97b6_4af4f37bf34d.slice/crio-4964d54de9bf03e10a834540dd59b482bf13cbdcb45c0a6a1ce79f8b16286284 WatchSource:0}: Error finding container 4964d54de9bf03e10a834540dd59b482bf13cbdcb45c0a6a1ce79f8b16286284: Status 404 returned error can't find the container with id 4964d54de9bf03e10a834540dd59b482bf13cbdcb45c0a6a1ce79f8b16286284 Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.907553 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.910042 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.910669 4857 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r69mf"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.936653 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wp82x"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.936711 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.951335 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:48 crc kubenswrapper[4857]: E0318 14:03:48.951658 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.451643439 +0000 UTC m=+213.580771896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.974006 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj"] Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.989322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" event={"ID":"189dc2a2-def0-41c0-9a6d-044db219385c","Type":"ContainerStarted","Data":"12952635e0fc47a1790cdda6021864b9df218c1bddec84f4d2e4916717cc8b64"} Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.995126 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" event={"ID":"d4300327-af6f-4261-8973-ef640d24993f","Type":"ContainerStarted","Data":"8d6c0387fd8c0631c23c544f085687a216ca843b3d0769c5d28cf122f9345c2b"} Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.996409 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" event={"ID":"f8a3a7a9-a253-480c-b074-485bc5768d8c","Type":"ContainerStarted","Data":"ab9622d3993e65766b027704e940e337f8d5e23832d709a4740acff32d7771b2"} Mar 18 14:03:48 crc kubenswrapper[4857]: I0318 14:03:48.998088 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" 
event={"ID":"5059db9d-7a66-401f-939c-e94b2bd2eff9","Type":"ContainerStarted","Data":"f46b96b16b9cde36ba0df938af745f8ccabc8cab1ddda10baf34f47fcb5141a7"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.001689 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" event={"ID":"096c78f1-127f-4281-81b4-22ff1fd40e04","Type":"ContainerStarted","Data":"98bed3bd902ce090ba1156072cb6b5cfe744bcd08af1c623a5aab1d204b7bced"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.016474 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jkjbk"] Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.033526 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" event={"ID":"95c8f32c-92c3-41f9-b4ca-e7ca90c22845","Type":"ContainerStarted","Data":"1a0f9fc1a504f5e6f5251e03839e2130582aaf2eb068ebdf5652465b0e6e59e1"} Mar 18 14:03:49 crc kubenswrapper[4857]: W0318 14:03:49.033959 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5416713e_365f_40d0_b5b5_57e570feaf91.slice/crio-92cadd71da61043e879df420dd9ef9539513b09e5cb84126d7711e2a9da88688 WatchSource:0}: Error finding container 92cadd71da61043e879df420dd9ef9539513b09e5cb84126d7711e2a9da88688: Status 404 returned error can't find the container with id 92cadd71da61043e879df420dd9ef9539513b09e5cb84126d7711e2a9da88688 Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.036107 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5rwkm" podStartSLOduration=163.036087001 podStartE2EDuration="2m43.036087001s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.034495558 
+0000 UTC m=+213.163624035" watchObservedRunningTime="2026-03-18 14:03:49.036087001 +0000 UTC m=+213.165215458" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.052365 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.053161 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.553140518 +0000 UTC m=+213.682268975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.075259 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" event={"ID":"1a8c5344-76bd-4d55-aab5-d1a100a5c08c","Type":"ContainerStarted","Data":"6dae5536d3c6a0bf2f54814dc3271dabd1cd8f0c51eed186d35de70f725f7bcb"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.075310 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" 
event={"ID":"1a8c5344-76bd-4d55-aab5-d1a100a5c08c","Type":"ContainerStarted","Data":"923a3ab85ea9ab9c0f41ed098de986cfc93e762914e88451cb8e456f4c146d75"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.076834 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.112411 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" podStartSLOduration=163.112387271 podStartE2EDuration="2m43.112387271s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.108558136 +0000 UTC m=+213.237686603" watchObservedRunningTime="2026-03-18 14:03:49.112387271 +0000 UTC m=+213.241515728" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.112796 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" event={"ID":"e427c5bb-ebf9-4836-8a31-9968569fbe48","Type":"ContainerStarted","Data":"46fcc59b0508163560e9309fbcb5d49cff08dc54e45c9d95280a377e30820565"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.112834 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" event={"ID":"e427c5bb-ebf9-4836-8a31-9968569fbe48","Type":"ContainerStarted","Data":"957b9df5bc144ef519afda74f26d1710626c9d6b986a942d3f5111e0a74e0937"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.118332 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk"] Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.118473 4857 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5t7m 
container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.118514 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.120264 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" event={"ID":"2dfd5f25-d490-4570-86ed-bf436c585658","Type":"ContainerStarted","Data":"f0d0768b70676f4a6b72771b64611c846a84f84e6e43ff03baf29e3b17f3253a"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.122188 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" event={"ID":"8ad51d9d-dcd1-467e-9aa6-162d19c035ed","Type":"ContainerStarted","Data":"fb0e63abbf8b2bafd34a95d8819dfd99ad5b9a1ee9e27a8f3da7b1bb0b7de8e3"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.126799 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerStarted","Data":"aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.126914 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.165925 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.169579 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.669560386 +0000 UTC m=+213.798688843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.199236 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52cxv" podStartSLOduration=164.199211168 podStartE2EDuration="2m44.199211168s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.163354626 +0000 UTC m=+213.292483083" watchObservedRunningTime="2026-03-18 14:03:49.199211168 +0000 UTC m=+213.328339625" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.206293 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podStartSLOduration=164.206274551 
podStartE2EDuration="2m44.206274551s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.197971094 +0000 UTC m=+213.327099551" watchObservedRunningTime="2026-03-18 14:03:49.206274551 +0000 UTC m=+213.335403008" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.220153 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" event={"ID":"04d82f58-0677-4450-baff-d3620aa86b32","Type":"ContainerStarted","Data":"44abe483027295e46fc754d03357578aa6302e987e9ce94fb7eb19077b59f5f8"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.220197 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" event={"ID":"04d82f58-0677-4450-baff-d3620aa86b32","Type":"ContainerStarted","Data":"4f2783d4a2f765938ad781c03fadfebffaf3b8425c23005cfc866525ce1a4b32"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.220217 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v6brz"] Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.234394 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" event={"ID":"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1","Type":"ContainerStarted","Data":"9d23424e6e255bce044456765b9df1dc08775f47c985edabb8ee5c43c4a38a0c"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.246868 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc"] Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.248051 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" 
event={"ID":"287df787-86a7-4a56-b5a1-fb55b6bed91b","Type":"ContainerStarted","Data":"da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.258991 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xf995" podStartSLOduration=164.258941713 podStartE2EDuration="2m44.258941713s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.236623272 +0000 UTC m=+213.365751759" watchObservedRunningTime="2026-03-18 14:03:49.258941713 +0000 UTC m=+213.388070170" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.267109 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.267713 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.767688163 +0000 UTC m=+213.896816620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.269138 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" event={"ID":"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d","Type":"ContainerStarted","Data":"9d26a17e265fff9518983e41d895a0ad938513582a4c68b718be84faa78bd2f4"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.269163 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60870: no serving certificate available for the kubelet" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.275447 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" event={"ID":"f5a03aa3-6810-477f-8f45-79abe51d7d7e","Type":"ContainerStarted","Data":"8727af4fbdb5b366c2c98c1b5e94ba5cb0d061a049585c3b6b0d2dd030633267"} Mar 18 14:03:49 crc kubenswrapper[4857]: W0318 14:03:49.297020 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a45c6f_d5f7_49f7_9cf2_034bb7ea0b25.slice/crio-6469158449f2e0d76ffbc03d5bcf16c71ff5f0dccfa0b6ae53d7b5ae31860043 WatchSource:0}: Error finding container 6469158449f2e0d76ffbc03d5bcf16c71ff5f0dccfa0b6ae53d7b5ae31860043: Status 404 returned error can't find the container with id 6469158449f2e0d76ffbc03d5bcf16c71ff5f0dccfa0b6ae53d7b5ae31860043 Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.298766 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerStarted","Data":"7759d088059197967c089884f3f91531407e2fc4f6ee46046b6d46e63eecd7a9"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.302450 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" event={"ID":"3387b870-2054-4e0f-97b6-4af4f37bf34d","Type":"ContainerStarted","Data":"4964d54de9bf03e10a834540dd59b482bf13cbdcb45c0a6a1ce79f8b16286284"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.314216 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" event={"ID":"0978ab58-ab3c-4265-8674-c2572b9b47b6","Type":"ContainerStarted","Data":"ebb3a81abcae4e5797b441117d282261bc6f345d565ce8902f1d354e1032380e"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.325639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" event={"ID":"308e1e78-75ec-431a-82a9-09437cccd9c9","Type":"ContainerStarted","Data":"355ca61e26f1e950159d2e428e4d49e83741d5f87aa0fbf8679f875657b04081"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.327500 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:49 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:49 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:49 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.327560 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.328154 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" event={"ID":"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9","Type":"ContainerStarted","Data":"54ab2cb8d2dfe8c3f0dd3f9544caf3bfa5342ab84ef5cd9e884db2a65e2e9989"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.343805 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bqqp" event={"ID":"35ee9206-490f-4303-9ee7-198148cb3227","Type":"ContainerStarted","Data":"a3ff79df1f1d26be30d755dc04aa22f5812de66f157cd80c3edbfdf837a3a019"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.343861 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bqqp" event={"ID":"35ee9206-490f-4303-9ee7-198148cb3227","Type":"ContainerStarted","Data":"1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.356588 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60886: no serving certificate available for the kubelet" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.359605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" event={"ID":"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825","Type":"ContainerStarted","Data":"4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad"} Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.365798 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kndt2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Mar 18 14:03:49 crc 
kubenswrapper[4857]: I0318 14:03:49.365861 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.369172 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.370815 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:49.870739554 +0000 UTC m=+213.999868191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.375015 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.377860 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.381874 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4bqqp" podStartSLOduration=164.381844688 podStartE2EDuration="2m44.381844688s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:49.381578141 +0000 UTC m=+213.510706608" watchObservedRunningTime="2026-03-18 14:03:49.381844688 +0000 UTC m=+213.510973145" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.578675 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.579610 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.079569712 +0000 UTC m=+214.208698169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.589325 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60892: no serving certificate available for the kubelet" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.677272 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60908: no serving certificate available for the kubelet" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.692793 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.693374 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.193353727 +0000 UTC m=+214.322482184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.794471 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60910: no serving certificate available for the kubelet" Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.807725 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.808184 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.30814578 +0000 UTC m=+214.437274227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.909331 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:49 crc kubenswrapper[4857]: E0318 14:03:49.909916 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.409898727 +0000 UTC m=+214.539027184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:49 crc kubenswrapper[4857]: I0318 14:03:49.946663 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60914: no serving certificate available for the kubelet" Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.105346 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:50 crc kubenswrapper[4857]: E0318 14:03:50.105828 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.60580523 +0000 UTC m=+214.734933688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.155674 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60916: no serving certificate available for the kubelet" Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.385887 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:50 crc kubenswrapper[4857]: E0318 14:03:50.386318 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:50.88630486 +0000 UTC m=+215.015433317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.420493 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" event={"ID":"6386751b-4de0-4258-aa09-d4cf545db8b1","Type":"ContainerStarted","Data":"ef6cc5dca96a40d006423de311934c4127de92cbe6c4fa3e18e717755c952a99"} Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.429920 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" event={"ID":"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825","Type":"ContainerStarted","Data":"4e3088ed0528fc9d50a08ac061c0a4e2c3cbfa7deb926a3d8e87ceab8021f9ec"} Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.431043 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:50 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:50 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:50 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.431075 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:50 
crc kubenswrapper[4857]: I0318 14:03:50.547657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" event={"ID":"d2aa0233-e26e-477a-adb9-6b281555b255","Type":"ContainerStarted","Data":"bbee29135dc3ce58479f52fd597e39d11dfb325f355bc5bd3aef42def76bcd56"} Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.552992 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:50 crc kubenswrapper[4857]: E0318 14:03:50.553108 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.053082637 +0000 UTC m=+215.182211094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.553169 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:50 crc kubenswrapper[4857]: E0318 14:03:50.553657 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.053649282 +0000 UTC m=+215.182777739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.566733 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" podStartSLOduration=165.56671257 podStartE2EDuration="2m45.56671257s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:50.566507704 +0000 UTC m=+214.695636161" watchObservedRunningTime="2026-03-18 14:03:50.56671257 +0000 UTC m=+214.695841027" Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.708576 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60932: no serving certificate available for the kubelet" Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.709217 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.709450 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" event={"ID":"2010b90c-be36-487d-8050-071bac0d5600","Type":"ContainerStarted","Data":"809c888f833a553440a459e9f47e36a112159c574869a2d997c47415954f13da"} Mar 18 14:03:50 crc 
kubenswrapper[4857]: E0318 14:03:50.710060 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.210039044 +0000 UTC m=+215.339167501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.710838 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" podStartSLOduration=164.710817586 podStartE2EDuration="2m44.710817586s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:50.70952872 +0000 UTC m=+214.838657177" watchObservedRunningTime="2026-03-18 14:03:50.710817586 +0000 UTC m=+214.839946043" Mar 18 14:03:50 crc kubenswrapper[4857]: I0318 14:03:50.816699 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:50 crc kubenswrapper[4857]: E0318 14:03:50.817327 4857 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.317308291 +0000 UTC m=+215.446436748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:50.992525 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:50.993057 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.493038123 +0000 UTC m=+215.622166580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.005404 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" event={"ID":"8ad51d9d-dcd1-467e-9aa6-162d19c035ed","Type":"ContainerStarted","Data":"1f63aefa15bfd32c6e0413b7646b41031ec9ec2b0ba15c783c0bca7d09de4af6"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.006505 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.009669 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" event={"ID":"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25","Type":"ContainerStarted","Data":"6469158449f2e0d76ffbc03d5bcf16c71ff5f0dccfa0b6ae53d7b5ae31860043"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.010488 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" event={"ID":"a977ae9e-847e-402e-ba1f-b716811ee998","Type":"ContainerStarted","Data":"96505c502e797dde87d56482acc58404e44850d666e9d9a2bc056a19d35a4506"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.011500 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" event={"ID":"95c8f32c-92c3-41f9-b4ca-e7ca90c22845","Type":"ContainerStarted","Data":"7a764559cc86fbfb85ca61279ae6e1a3b38f73b3ba718444ef3e9402d106619f"} Mar 18 
14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.012375 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" event={"ID":"7aa2da99-50b5-4d4f-aa55-b4507cd134be","Type":"ContainerStarted","Data":"91ebf1307fcad37d9326cdc2fca0b90c967522a95f5122a770c4f22bbcc0535b"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.013269 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jkjbk" event={"ID":"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac","Type":"ContainerStarted","Data":"f8df4c192dd62c6b99b67f113b304d13803a5821811f48581441d4a484ae0970"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.014743 4857 generic.go:334] "Generic (PLEG): container finished" podID="b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d" containerID="c8d4a53ee4fbcee5323b80338c58444434f3528625d8d1e6ef7ffd311eb1c6d4" exitCode=0 Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.014824 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" event={"ID":"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d","Type":"ContainerDied","Data":"c8d4a53ee4fbcee5323b80338c58444434f3528625d8d1e6ef7ffd311eb1c6d4"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.016913 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" event={"ID":"0ea4eb52-a889-4cec-8511-f1ef21cc732f","Type":"ContainerStarted","Data":"cb1b47b22ade7ae0f21ad9965a78ed7570a6c8f541467102cd2ec43d6a5d5634"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.018364 4857 generic.go:334] "Generic (PLEG): container finished" podID="d4300327-af6f-4261-8973-ef640d24993f" containerID="06376925f6103ebc233468e08542d00379fce9520312bafaf7605155213c0c84" exitCode=0 Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.018412 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" event={"ID":"d4300327-af6f-4261-8973-ef640d24993f","Type":"ContainerDied","Data":"06376925f6103ebc233468e08542d00379fce9520312bafaf7605155213c0c84"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.020130 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r69mf" event={"ID":"95af33b4-74c3-4fbf-8286-bc021087c17c","Type":"ContainerStarted","Data":"3b2d4fdc2471f0ed47affbd2333dd899a2ce2eb729d05fca5769ded92cdfa151"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.021396 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" event={"ID":"5059db9d-7a66-401f-939c-e94b2bd2eff9","Type":"ContainerStarted","Data":"7c2d78f2d799645d429a7d1d60b3df72cce62c6c63a9c212e5b185df295ecdd5"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.033031 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.033103 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.093914 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: 
\"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:51.094824 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.5948102 +0000 UTC m=+215.723938647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.101426 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.335545 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.335628 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.338192 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:51.342227 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.842188793 +0000 UTC m=+215.971317250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.405005 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:51 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:51 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:51 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.405065 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.421713 4857 patch_prober.go:28] interesting 
pod/controller-manager-879f6c89f-r5t7m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.424470 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.453141 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" event={"ID":"5416713e-365f-40d0-b5b5-57e570feaf91","Type":"ContainerStarted","Data":"92cadd71da61043e879df420dd9ef9539513b09e5cb84126d7711e2a9da88688"} Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.454267 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:51.456438 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:51.95641921 +0000 UTC m=+216.085547667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.463077 4857 ???:1] "http: TLS handshake error from 192.168.126.11:60936: no serving certificate available for the kubelet" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.462639 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podStartSLOduration=165.462586289 podStartE2EDuration="2m45.462586289s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:51.101694648 +0000 UTC m=+215.230823115" watchObservedRunningTime="2026-03-18 14:03:51.462586289 +0000 UTC m=+215.591714746" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.506269 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" podStartSLOduration=165.506238074 podStartE2EDuration="2m45.506238074s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:51.48819314 +0000 UTC m=+215.617321617" watchObservedRunningTime="2026-03-18 14:03:51.506238074 +0000 UTC m=+215.635366531" Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.680443 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:51.680666 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.180635079 +0000 UTC m=+216.309763606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.681079 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:51 crc kubenswrapper[4857]: E0318 14:03:51.683666 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.183651042 +0000 UTC m=+216.312779499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:51 crc kubenswrapper[4857]: I0318 14:03:51.852977 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:51.853519 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.353487122 +0000 UTC m=+216.482615579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:51.861054 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gvkpz" podStartSLOduration=166.861027199 podStartE2EDuration="2m46.861027199s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:51.857961825 +0000 UTC m=+215.987090282" watchObservedRunningTime="2026-03-18 14:03:51.861027199 +0000 UTC m=+215.990155656" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.000952 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.001408 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.501391512 +0000 UTC m=+216.630519979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.148446 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.148730 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.648703155 +0000 UTC m=+216.777831612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.149023 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.149900 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.649886728 +0000 UTC m=+216.779015185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.251149 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.251441 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.751421038 +0000 UTC m=+216.880549495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.251540 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.252028 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.752019854 +0000 UTC m=+216.881148311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.398027 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.398353 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:52.89833676 +0000 UTC m=+217.027465217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.523015 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.523815 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.023802305 +0000 UTC m=+217.152930762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.524032 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.524057 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.524224 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.524247 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.530306 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.531872 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.534206 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:52 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:52 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:52 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.534290 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.538218 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.680917 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.682237 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.182218972 +0000 UTC m=+217.311347429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.682306 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.682330 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.682332 4857 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5t7m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.682360 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.682877 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.684837 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.184816463 +0000 UTC m=+217.313944990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.685785 4857 patch_prober.go:28] interesting pod/console-f9d7485db-4bqqp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.685895 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bqqp" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial 
tcp 10.217.0.28:8443: connect: connection refused" Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.838288 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:52 crc kubenswrapper[4857]: E0318 14:03:52.838802 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.338778409 +0000 UTC m=+217.467906866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.855401 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" event={"ID":"e2a45c6f-d5f7-49f7-9cf2-034bb7ea0b25","Type":"ContainerStarted","Data":"bca2103e247471c05ca95434abd63745fe97f30838374a457fd3488e0ea17a85"} Mar 18 14:03:52 crc kubenswrapper[4857]: I0318 14:03:52.887450 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" event={"ID":"6386751b-4de0-4258-aa09-d4cf545db8b1","Type":"ContainerStarted","Data":"0a09f940e49c48997981c3cae05aa312150947b36d06b454e516e37b31702348"} Mar 18 14:03:53 crc 
kubenswrapper[4857]: I0318 14:03:53.043252 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.043308 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.043356 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.043483 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.043540 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.046097 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.546082275 +0000 UTC m=+217.675210732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.070285 4857 ???:1] "http: TLS handshake error from 192.168.126.11:56074: no serving certificate available for the kubelet" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.157668 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.159386 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bssd" event={"ID":"f5a03aa3-6810-477f-8f45-79abe51d7d7e","Type":"ContainerStarted","Data":"f6ea1c533dfc6f9a90d2b17d89540f7feeebcb04ae1a916cbe0bef92323ddf93"} Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.160290 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:03:53.660257081 +0000 UTC m=+217.789385538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.260878 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.275819 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.775796434 +0000 UTC m=+217.904924891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.380108 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.380530 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.880508901 +0000 UTC m=+218.009637358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.394558 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-v6brz" podStartSLOduration=167.394536375 podStartE2EDuration="2m47.394536375s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:53.392841549 +0000 UTC m=+217.521970006" watchObservedRunningTime="2026-03-18 14:03:53.394536375 +0000 UTC m=+217.523664832" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.443229 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:53 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:53 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:53 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.443312 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.444227 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.444252 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.444345 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.444408 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.498914 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.499391 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:53.999368425 +0000 UTC m=+218.128496892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.515859 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.515928 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.521013 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.521079 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" 
podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.605537 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.605960 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:54.105946174 +0000 UTC m=+218.235074631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.655840 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podStartSLOduration=167.655823799 podStartE2EDuration="2m47.655823799s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:53.654226966 +0000 UTC m=+217.783355423" 
watchObservedRunningTime="2026-03-18 14:03:53.655823799 +0000 UTC m=+217.784952256" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.706944 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.709261 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:54.209242442 +0000 UTC m=+218.338370899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.809906 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.814923 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:54.314894925 +0000 UTC m=+218.444023382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.832019 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.832081 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.917010 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:53 crc kubenswrapper[4857]: E0318 14:03:53.917317 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:54.417304439 +0000 UTC m=+218.546432896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942070 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942107 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-drmwr" event={"ID":"f8a3a7a9-a253-480c-b074-485bc5768d8c","Type":"ContainerStarted","Data":"69ced68d49624356979a8b3cced954a20784d28b4838e507f94408fe5647d64c"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942132 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" event={"ID":"189dc2a2-def0-41c0-9a6d-044db219385c","Type":"ContainerStarted","Data":"b86b2af75c0ecf4ea468d0bcc3054f59eb142c1b035d8cd9409dea009d362d7c"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942151 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942219 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 
14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942236 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerStarted","Data":"4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942248 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" event={"ID":"3387b870-2054-4e0f-97b6-4af4f37bf34d","Type":"ContainerStarted","Data":"a97d665e87dca706b2c7c7dfdea0091b04fee35c6af3d47ca266f428853c7d27"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942262 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" event={"ID":"0978ab58-ab3c-4265-8674-c2572b9b47b6","Type":"ContainerStarted","Data":"7d0ab957a2c1321cbe02944359d9b2e58adb7544cd747769fd980d124c5fb559"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942284 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r69mf" event={"ID":"95af33b4-74c3-4fbf-8286-bc021087c17c","Type":"ContainerStarted","Data":"eb08722d72f5198aae30a5f33304fa8dd4d33720a49fa78d8dfbe304e22b601a"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942296 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" event={"ID":"2dfd5f25-d490-4570-86ed-bf436c585658","Type":"ContainerStarted","Data":"543800aa3fcb069f6a68e788f3d01fc4ab54bb9bb07b54af1c09567f4066a5cd"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942309 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" 
event={"ID":"ec0b6656-7dff-430f-b121-5bbbc7bc8fc9","Type":"ContainerStarted","Data":"d4a44d7b92e3fd42d578d2a289a61e272d030f8ba69957479a5ecbc7f051bbe8"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942321 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" event={"ID":"308e1e78-75ec-431a-82a9-09437cccd9c9","Type":"ContainerStarted","Data":"29e08ab2e2c65ddc9c8a538982811dd17af5c75e2aa1d187f7869c5f25ac466b"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.942334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" event={"ID":"5416713e-365f-40d0-b5b5-57e570feaf91","Type":"ContainerStarted","Data":"ed3e8a9782c017055c682ec8f03dd9c3b336c630a8c3b0be12760fc8a9d00b21"} Mar 18 14:03:53 crc kubenswrapper[4857]: I0318 14:03:53.966223 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-r69mf" podStartSLOduration=13.966184557 podStartE2EDuration="13.966184557s" podCreationTimestamp="2026-03-18 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:53.923538069 +0000 UTC m=+218.052666526" watchObservedRunningTime="2026-03-18 14:03:53.966184557 +0000 UTC m=+218.095313014" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.052230 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:54 crc kubenswrapper[4857]: E0318 14:03:54.052355 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:54.552331886 +0000 UTC m=+218.681460343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.190390 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mbv77" podStartSLOduration=169.190352915 podStartE2EDuration="2m49.190352915s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:54.070175604 +0000 UTC m=+218.199304061" watchObservedRunningTime="2026-03-18 14:03:54.190352915 +0000 UTC m=+218.319481372" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.199458 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kx9ws" podStartSLOduration=168.199425843 podStartE2EDuration="2m48.199425843s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:54.19237108 +0000 UTC m=+218.321499537" watchObservedRunningTime="2026-03-18 14:03:54.199425843 +0000 UTC m=+218.328554300" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 
14:03:54.300992 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:54 crc kubenswrapper[4857]: E0318 14:03:54.506462 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.006426219 +0000 UTC m=+219.135554676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.512374 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:54 crc kubenswrapper[4857]: E0318 14:03:54.517375 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:03:55.017346718 +0000 UTC m=+219.146475175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.518405 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qrcrr" podStartSLOduration=168.518371076 podStartE2EDuration="2m48.518371076s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:54.508248489 +0000 UTC m=+218.637376956" watchObservedRunningTime="2026-03-18 14:03:54.518371076 +0000 UTC m=+218.647499533" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.592183 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:54 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:54 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:54 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.592292 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.618243 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:54 crc kubenswrapper[4857]: E0318 14:03:54.618662 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.118648142 +0000 UTC m=+219.247776599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.660324 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-7fk6f" podStartSLOduration=168.660294362 podStartE2EDuration="2m48.660294362s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:54.617555702 +0000 UTC m=+218.746684159" watchObservedRunningTime="2026-03-18 14:03:54.660294362 +0000 UTC m=+218.789422829" Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.800474 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:54 crc kubenswrapper[4857]: E0318 14:03:54.802256 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.302234558 +0000 UTC m=+219.431363015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:54 crc kubenswrapper[4857]: I0318 14:03:54.874127 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jkjbk" event={"ID":"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac","Type":"ContainerStarted","Data":"6f48aa022d72c431c61c2b2cc89bd9c797838230f8b29e50f4b9ef1d796c263a"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.038539 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.047680 
4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.547662058 +0000 UTC m=+219.676790515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.055710 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" event={"ID":"d2aa0233-e26e-477a-adb9-6b281555b255","Type":"ContainerStarted","Data":"babde03eaea525f11b2af769e1c9bd1c60623fd793bc586cb93c19e627f47c7c"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.160022 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.160738 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.660714374 +0000 UTC m=+219.789842831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.322277 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.323377 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.823362157 +0000 UTC m=+219.952490614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.325657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" event={"ID":"7aa2da99-50b5-4d4f-aa55-b4507cd134be","Type":"ContainerStarted","Data":"993a29a4ffe6e6ad68bdae4729c3fba195eed0b7d2a8dd672d4978ec3962a26a"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.325701 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" event={"ID":"6386751b-4de0-4258-aa09-d4cf545db8b1","Type":"ContainerStarted","Data":"6fc1501222fcf2cd2e3b3af7ca2d4e4b9d5862e54fcae4d37e8268e80a8f97a6"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.327414 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:55 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:55 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:55 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.327459 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.460563 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.462048 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:55.962014263 +0000 UTC m=+220.091142720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.465022 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.476919 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.477244 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" 
podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.487045 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" event={"ID":"5059db9d-7a66-401f-939c-e94b2bd2eff9","Type":"ContainerStarted","Data":"dd32e25f8dd6fbbe21fce1963d9c891794e6bdfa6f06cff1ca32557806ab7b66"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.606048 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.606550 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.10653495 +0000 UTC m=+220.235663407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.627970 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prb9h" podStartSLOduration=169.627940416 podStartE2EDuration="2m49.627940416s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:55.487934703 +0000 UTC m=+219.617063160" watchObservedRunningTime="2026-03-18 14:03:55.627940416 +0000 UTC m=+219.757068883" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.633317 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" event={"ID":"de2b9698-9e6e-4f2f-a21f-178b3e8cb7f1","Type":"ContainerStarted","Data":"d4f32fd4d1794775983bf6bb3cd1b7f97fc0285121c5f4f6e139a64c73f4448c"} Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.633923 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.633969 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.636532 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.636581 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.644642 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.661765 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlmdz" podStartSLOduration=169.661718941 podStartE2EDuration="2m49.661718941s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:55.628933723 +0000 UTC m=+219.758062180" watchObservedRunningTime="2026-03-18 14:03:55.661718941 +0000 UTC m=+219.790847398" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.662806 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podStartSLOduration=169.662798391 podStartE2EDuration="2m49.662798391s" 
podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:55.661564127 +0000 UTC m=+219.790692584" watchObservedRunningTime="2026-03-18 14:03:55.662798391 +0000 UTC m=+219.791926848" Mar 18 14:03:55 crc kubenswrapper[4857]: I0318 14:03:55.803065 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:55 crc kubenswrapper[4857]: E0318 14:03:55.808534 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.30849476 +0000 UTC m=+220.437623217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:55.910920 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:55.912293 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.412272671 +0000 UTC m=+220.541401208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.049246 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.049646 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.549629141 +0000 UTC m=+220.678757598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.061054 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mswzz" podStartSLOduration=170.061035254 podStartE2EDuration="2m50.061035254s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:55.901805035 +0000 UTC m=+220.030933482" watchObservedRunningTime="2026-03-18 14:03:56.061035254 +0000 UTC m=+220.190163711" Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.170441 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.170773 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.670744347 +0000 UTC m=+220.799872804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.181123 4857 ???:1] "http: TLS handshake error from 192.168.126.11:56086: no serving certificate available for the kubelet"
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.201700 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" podStartSLOduration=170.201679034 podStartE2EDuration="2m50.201679034s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:56.186343024 +0000 UTC m=+220.315471481" watchObservedRunningTime="2026-03-18 14:03:56.201679034 +0000 UTC m=+220.330807491"
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.272448 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.272661 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.772621457 +0000 UTC m=+220.901749924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.272960 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.273379 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.773364667 +0000 UTC m=+220.902493124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.314701 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 14:03:56 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld
Mar 18 14:03:56 crc kubenswrapper[4857]: [+]process-running ok
Mar 18 14:03:56 crc kubenswrapper[4857]: healthz check failed
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.314829 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.374028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.374255 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.874221629 +0000 UTC m=+221.003350086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.374584 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.375200 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.875186975 +0000 UTC m=+221.004315422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.479091 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.479650 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:56.979606684 +0000 UTC m=+221.108735141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.742100 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.742621 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.242603415 +0000 UTC m=+221.371731872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.843031 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" event={"ID":"95c8f32c-92c3-41f9-b4ca-e7ca90c22845","Type":"ContainerStarted","Data":"9ce70ff6da4906775810e462469fe9c4ee1f6ed06edc486ab994854e8e3a0ffe"}
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.843070 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.843668 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.343647851 +0000 UTC m=+221.472776308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.850109 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" event={"ID":"308e1e78-75ec-431a-82a9-09437cccd9c9","Type":"ContainerStarted","Data":"ee9ab536ad369e4d337fd221f7b209c8e0b750450bdce9e9f7c823428eb14eec"}
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.968997 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:56 crc kubenswrapper[4857]: E0318 14:03:56.969465 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.469445626 +0000 UTC m=+221.598574093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.975058 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" event={"ID":"2010b90c-be36-487d-8050-071bac0d5600","Type":"ContainerStarted","Data":"f348c2476d653298129f991e56be2daa286c272fadd27f457bb8450c2ee73ae1"}
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.978399 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" event={"ID":"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d","Type":"ContainerStarted","Data":"e928f858956509d78a1cef041ac31d55df2dd5a03f01387b9e4a02a136767670"}
Mar 18 14:03:56 crc kubenswrapper[4857]: I0318 14:03:56.981347 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" event={"ID":"d4300327-af6f-4261-8973-ef640d24993f","Type":"ContainerStarted","Data":"a21471c5f59b047563f7738ebb7769be0d67da500d562a72c41d6aa7633e5a7d"}
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.987392 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" event={"ID":"d2aa0233-e26e-477a-adb9-6b281555b255","Type":"ContainerStarted","Data":"caef1f058eb721f06b0b8c4e176a7d6041ddc1c103dfe7f18f11b7f718c30210"}
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.988550 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.990164 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" event={"ID":"a977ae9e-847e-402e-ba1f-b716811ee998","Type":"ContainerStarted","Data":"fb5726fb773295326e43eca6d1f35c99f7505b6ac2dbc2397c2c9c85e3814778"}
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.991413 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.991458 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.992619 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qdffj" event={"ID":"0ea4eb52-a889-4cec-8511-f1ef21cc732f","Type":"ContainerStarted","Data":"256256be5125f8225744fa0d1ca56e109f4316f34dc2544de4215aa7d0b77479"}
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:56.995982 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" event={"ID":"7aa2da99-50b5-4d4f-aa55-b4507cd134be","Type":"ContainerStarted","Data":"529a15eca6523c8a816d2637d6a48371f3d55bf8f212f49f0a21efb981923a56"}
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.039467 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.039577 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.111247 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.117206 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.61716946 +0000 UTC m=+221.746297917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.118035 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.120533 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.620516892 +0000 UTC m=+221.749645349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.138778 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xsbrw" podStartSLOduration=171.138746061 podStartE2EDuration="2m51.138746061s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:57.137549298 +0000 UTC m=+221.266677765" watchObservedRunningTime="2026-03-18 14:03:57.138746061 +0000 UTC m=+221.267874518"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.240502 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.242558 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.742540003 +0000 UTC m=+221.871668460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.325740 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 14:03:57 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld
Mar 18 14:03:57 crc kubenswrapper[4857]: [+]process-running ok
Mar 18 14:03:57 crc kubenswrapper[4857]: healthz check failed
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.325830 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.342880 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.343491 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:57.843473057 +0000 UTC m=+221.972601514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.420804 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podStartSLOduration=171.420779293 podStartE2EDuration="2m51.420779293s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:57.420462715 +0000 UTC m=+221.549591172" watchObservedRunningTime="2026-03-18 14:03:57.420779293 +0000 UTC m=+221.549907750"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.516487 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.516888 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.016868524 +0000 UTC m=+222.145996971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.551461 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.554929 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" podStartSLOduration=171.554906776 podStartE2EDuration="2m51.554906776s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:57.551068031 +0000 UTC m=+221.680196508" watchObservedRunningTime="2026-03-18 14:03:57.554906776 +0000 UTC m=+221.684035233"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.617690 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.618238 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.11822318 +0000 UTC m=+222.247351637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.673724 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hl9jv" podStartSLOduration=171.673698268 podStartE2EDuration="2m51.673698268s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:57.640276523 +0000 UTC m=+221.769404980" watchObservedRunningTime="2026-03-18 14:03:57.673698268 +0000 UTC m=+221.802826725"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.768459 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.768933 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.268903385 +0000 UTC m=+222.398031842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.769086 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.769944 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.269928393 +0000 UTC m=+222.399056890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.770198 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.770380 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.772703 4857 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-dnrd6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.772776 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" podUID="d4300327-af6f-4261-8973-ef640d24993f" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.870115 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.870580 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.370560758 +0000 UTC m=+222.499689215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.894317 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-r5wln" podStartSLOduration=171.894294098 podStartE2EDuration="2m51.894294098s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:03:57.886160625 +0000 UTC m=+222.015289082" watchObservedRunningTime="2026-03-18 14:03:57.894294098 +0000 UTC m=+222.023422555"
Mar 18 14:03:57 crc kubenswrapper[4857]: I0318 14:03:57.972048 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:57 crc kubenswrapper[4857]: E0318 14:03:57.972458 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.472442068 +0000 UTC m=+222.601570515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.112786 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.113260 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.613232133 +0000 UTC m=+222.742360590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.143325 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" event={"ID":"2010b90c-be36-487d-8050-071bac0d5600","Type":"ContainerStarted","Data":"935b37d336629aefdab927eafc898257f5b8465137f1471fddcab2572f672ab1"}
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.214888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.215616 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.715590965 +0000 UTC m=+222.844719422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.243708 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jkjbk" event={"ID":"3e468ef1-f1e5-4e48-bf3d-6a7f60cda4ac","Type":"ContainerStarted","Data":"a55cfdf95452105d2a17f6a53036601a39f16deb0f29fffdf4173f3c712fcba3"}
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.244787 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jkjbk"
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.318089 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.322768 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" event={"ID":"b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d","Type":"ContainerStarted","Data":"afce932c64c1749a675e6661ab23eff5059b76fbca65922f55c7c28fd297f9a8"}
Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.330261 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-03-18 14:03:58.830210414 +0000 UTC m=+222.959338881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.332284 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.332389 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.357592 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:58 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:58 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:58 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.357762 4857 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.421298 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.425377 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:58.925347219 +0000 UTC m=+223.054475676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.524190 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.524473 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.024433692 +0000 UTC m=+223.153562149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.524859 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.525296 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.025278105 +0000 UTC m=+223.154406562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.627412 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.627858 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.127840803 +0000 UTC m=+223.256969260 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.728875 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.729431 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.229406654 +0000 UTC m=+223.358535111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.832433 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.832902 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.332880927 +0000 UTC m=+223.462009384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:58 crc kubenswrapper[4857]: I0318 14:03:58.998008 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:58 crc kubenswrapper[4857]: E0318 14:03:58.998522 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.498504892 +0000 UTC m=+223.627633349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.153265 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.153688 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.65366964 +0000 UTC m=+223.782798097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.475361 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.476032 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:03:59.975997965 +0000 UTC m=+224.105126562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.528842 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:03:59 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:03:59 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:03:59 crc kubenswrapper[4857]: healthz check failed Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.529198 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.607096 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.608419 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:00.108391901 +0000 UTC m=+224.237520358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.708432 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.708859 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:00.20883368 +0000 UTC m=+224.337962137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.809080 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.809827 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:00.309799924 +0000 UTC m=+224.438928371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:03:59 crc kubenswrapper[4857]: I0318 14:03:59.910934 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:03:59 crc kubenswrapper[4857]: E0318 14:03:59.911880 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:00.411863569 +0000 UTC m=+224.540992026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.089686 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.090126 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:00.590104339 +0000 UTC m=+224.719232796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.322365 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.322844 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:00.822820221 +0000 UTC m=+224.951948758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.332199 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:00 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:00 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:00 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.332288 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.427503 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.427776 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:00.927760694 +0000 UTC m=+225.056889151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.605186 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.606061 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.106043495 +0000 UTC m=+225.235171962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.618657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" event={"ID":"189dc2a2-def0-41c0-9a6d-044db219385c","Type":"ContainerStarted","Data":"face485b37e335af4e4db7a217d6b0e74885d0f556e104eae78a9a8cc58ac08a"} Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.624242 4857 generic.go:334] "Generic (PLEG): container finished" podID="d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" containerID="4e3088ed0528fc9d50a08ac061c0a4e2c3cbfa7deb926a3d8e87ceab8021f9ec" exitCode=0 Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.625030 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" event={"ID":"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825","Type":"ContainerDied","Data":"4e3088ed0528fc9d50a08ac061c0a4e2c3cbfa7deb926a3d8e87ceab8021f9ec"} Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.745550 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.746278 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.246256894 +0000 UTC m=+225.375385351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.870046 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.893455 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564044-5r7zc"] Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.895101 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.896075 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" podStartSLOduration=175.896048216 podStartE2EDuration="2m55.896048216s" podCreationTimestamp="2026-03-18 14:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:00.827602052 +0000 UTC m=+224.956730509" watchObservedRunningTime="2026-03-18 14:04:00.896048216 +0000 UTC m=+225.025176673" Mar 18 14:04:00 crc kubenswrapper[4857]: E0318 14:04:00.896494 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.396475347 +0000 UTC m=+225.525603804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.911408 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:04:00 crc kubenswrapper[4857]: I0318 14:04:00.912688 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564044-5r7zc"] Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.006819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.007020 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7hz\" (UniqueName: \"kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz\") pod \"auto-csr-approver-29564044-5r7zc\" (UID: \"af5933af-d25b-4d7a-8fda-e95c340a38ac\") " pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.007234 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:01.507212269 +0000 UTC m=+225.636340726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.102220 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wp82x" podStartSLOduration=175.102090627 podStartE2EDuration="2m55.102090627s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:01.100274447 +0000 UTC m=+225.229402904" watchObservedRunningTime="2026-03-18 14:04:01.102090627 +0000 UTC m=+225.231219084" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.241061 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.241107 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch7hz\" (UniqueName: \"kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz\") pod \"auto-csr-approver-29564044-5r7zc\" (UID: \"af5933af-d25b-4d7a-8fda-e95c340a38ac\") " 
pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.242206 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.742184613 +0000 UTC m=+225.871313070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.327036 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:01 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:01 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:01 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.327136 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.365165 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.365663 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.865640423 +0000 UTC m=+225.994768870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.470536 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.471255 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:01.971233814 +0000 UTC m=+226.100362271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.516792 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch7hz\" (UniqueName: \"kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz\") pod \"auto-csr-approver-29564044-5r7zc\" (UID: \"af5933af-d25b-4d7a-8fda-e95c340a38ac\") " pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.534910 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jkjbk" podStartSLOduration=21.534887387 podStartE2EDuration="21.534887387s" podCreationTimestamp="2026-03-18 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:01.532167983 +0000 UTC m=+225.661296440" watchObservedRunningTime="2026-03-18 14:04:01.534887387 +0000 UTC m=+225.664015844" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.572503 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.573631 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.073604747 +0000 UTC m=+226.202733204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.583886 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.677128 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.677710 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.177690797 +0000 UTC m=+226.306819254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.761577 4857 ???:1] "http: TLS handshake error from 192.168.126.11:56094: no serving certificate available for the kubelet" Mar 18 14:04:01 crc kubenswrapper[4857]: I0318 14:04:01.964260 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:01 crc kubenswrapper[4857]: E0318 14:04:01.965999 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.4659688 +0000 UTC m=+226.595097247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.197676 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.198169 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.698153868 +0000 UTC m=+226.827282325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.403045 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.403466 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.903424478 +0000 UTC m=+227.032552935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.403710 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.404661 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:02.904632451 +0000 UTC m=+227.033760908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.410894 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:02 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:02 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:02 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.410973 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.568294 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.568634 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:03.068605001 +0000 UTC m=+227.197733458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.568707 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.569134 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.069108324 +0000 UTC m=+227.198236811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.589444 4857 patch_prober.go:28] interesting pod/console-f9d7485db-4bqqp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.589517 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bqqp" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.760255 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.760350 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.760417 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.760561 4857 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.260536596 +0000 UTC m=+227.389665053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.761053 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.761455 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.261443771 +0000 UTC m=+227.390572228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.762161 4857 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qr84c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.762196 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" podUID="b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.862856 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:02 crc kubenswrapper[4857]: E0318 14:04:02.864305 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.364282496 +0000 UTC m=+227.493410953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:02 crc kubenswrapper[4857]: I0318 14:04:02.901331 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" event={"ID":"189dc2a2-def0-41c0-9a6d-044db219385c","Type":"ContainerStarted","Data":"34ab638be9af5c0850336bc5741a03a2b1ca3fcd11f469c9d91c574ff31622f9"} Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.051104 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.051492 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.551479562 +0000 UTC m=+227.680608009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.052271 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.052315 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.052340 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.052397 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.151955 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.152186 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.652153798 +0000 UTC m=+227.781282255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.152284 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.152567 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"] Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.152806 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" 
containerName="controller-manager" containerID="cri-o://6dae5536d3c6a0bf2f54814dc3271dabd1cd8f0c51eed186d35de70f725f7bcb" gracePeriod=30 Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.152995 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.652985401 +0000 UTC m=+227.782113858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.257017 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.257401 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.757384039 +0000 UTC m=+227.886512496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.468785 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.469765 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:03.969725383 +0000 UTC m=+228.098853840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.475129 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:03 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:03 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:03 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.475200 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.501254 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 14:04:03 crc kubenswrapper[4857]: I0318 14:04:03.630870 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:03 crc kubenswrapper[4857]: E0318 14:04:03.632069 4857 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:04.132052567 +0000 UTC m=+228.261181024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:04 crc kubenswrapper[4857]: I0318 14:04:03.921360 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:04 crc kubenswrapper[4857]: E0318 14:04:03.921860 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:04.421842262 +0000 UTC m=+228.550970719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.035653 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.036127 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:04.53610556 +0000 UTC m=+228.665234027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.137694 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.140044 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:04.640029756 +0000 UTC m=+228.769158213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.144019 4857 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.406285 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.406919 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:04.906895293 +0000 UTC m=+229.036023750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.420581 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:05 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:05 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:05 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.420639 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.495840 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.495929 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cghkz"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.496291 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerName="route-controller-manager" 
containerID="cri-o://d7a9cb313bec31ca1e4b82f6e74b4c77c481d56c872b55b051425f2e6186ecdb" gracePeriod=30 Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.497876 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.506062 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.508153 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.508202 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.508242 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrs5p\" (UniqueName: \"kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.508291 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.508663 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.008648099 +0000 UTC m=+229.137776626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.508928 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.609444 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.609825 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " 
pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.609877 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.609940 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrs5p\" (UniqueName: \"kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.611546 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.611583 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.611658 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:05.111636668 +0000 UTC m=+229.240765205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.618853 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.619936 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.621923 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.632380 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.638397 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dz4vq"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.653460 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.701016 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrs5p\" (UniqueName: \"kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p\") pod \"community-operators-cghkz\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.705365 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cghkz"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.708858 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.708970 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.710280 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jkjbk" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.711779 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.712298 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.212280114 +0000 UTC m=+229.341408571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.713445 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dz4vq"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.763922 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.764742 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.819024 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.819744 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.819811 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb4f\" 
(UniqueName: \"kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.819870 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.819923 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820020 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820056 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jht5l\" (UniqueName: \"kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820175 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggc8\" (UniqueName: \"kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820285 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820324 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.820350 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc 
kubenswrapper[4857]: E0318 14:04:04.820490 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.320473236 +0000 UTC m=+229.449601693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.830437 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.830742 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924316 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924372 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggc8\" (UniqueName: \"kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 
14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924417 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924456 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924491 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924525 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924559 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb4f\" (UniqueName: \"kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc 
kubenswrapper[4857]: I0318 14:04:04.924613 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924662 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924699 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924746 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.924790 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jht5l\" (UniqueName: \"kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " 
pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.926290 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.927731 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:04.928377 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.42835837 +0000 UTC m=+229.557486827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.930741 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.930827 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.934035 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.934081 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.936979 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.936988 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.937029 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.941125 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:04.953682 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content\") pod \"certified-operators-q8pg8\" (UID: 
\"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.025643 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:05.026273 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.526247941 +0000 UTC m=+229.655376398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.037427 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" event={"ID":"189dc2a2-def0-41c0-9a6d-044db219385c","Type":"ContainerStarted","Data":"89a3b31578cd21df5826b5c1a9e2c33be98eb366a84cc52de4c5b1c039f177c0"} Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.124612 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.168851 4857 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-18T14:04:04.144052006Z","Handler":null,"Name":""} Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.170998 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:05.171878 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.671858307 +0000 UTC m=+229.800986774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.171999 4857 generic.go:334] "Generic (PLEG): container finished" podID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerID="6dae5536d3c6a0bf2f54814dc3271dabd1cd8f0c51eed186d35de70f725f7bcb" exitCode=0 Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.318480 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:05.319545 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.81951056 +0000 UTC m=+229.948639017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.335002 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:05 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:05 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:05 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.335067 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.408364 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.415774 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb4f\" (UniqueName: \"kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f\") pod \"certified-operators-hzfl4\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.421968 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:05.423919 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-18 14:04:05.923901358 +0000 UTC m=+230.053029815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fh2dj" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.425675 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jht5l\" (UniqueName: \"kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l\") pod \"certified-operators-q8pg8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.436981 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggc8\" (UniqueName: \"kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8\") pod \"community-operators-dz4vq\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.626718 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:05 crc kubenswrapper[4857]: E0318 14:04:05.653733 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-18 14:04:06.153669189 +0000 UTC m=+230.282797646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.678859 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" podUID="189dc2a2-def0-41c0-9a6d-044db219385c" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.41:9898/healthz\": dial tcp 10.217.0.41:9898: connect: connection refused" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.696594 4857 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.696640 4857 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.723359 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" podStartSLOduration=25.723336897 podStartE2EDuration="25.723336897s" podCreationTimestamp="2026-03-18 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:05.719711048 +0000 UTC m=+229.848839505" watchObservedRunningTime="2026-03-18 14:04:05.723336897 +0000 UTC 
m=+229.852465354" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.742934 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.749078 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.749124 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:05 crc kubenswrapper[4857]: I0318 14:04:05.889022 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" event={"ID":"1a8c5344-76bd-4d55-aab5-d1a100a5c08c","Type":"ContainerDied","Data":"6dae5536d3c6a0bf2f54814dc3271dabd1cd8f0c51eed186d35de70f725f7bcb"} Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.272548 4857 generic.go:334] "Generic (PLEG): container finished" podID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerID="d7a9cb313bec31ca1e4b82f6e74b4c77c481d56c872b55b051425f2e6186ecdb" exitCode=0 Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.273880 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" event={"ID":"867f36a7-afd9-4d67-a7d3-42f2ca67ac91","Type":"ContainerDied","Data":"d7a9cb313bec31ca1e4b82f6e74b4c77c481d56c872b55b051425f2e6186ecdb"} Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.313528 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.342500 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.380358 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.525928 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.545634 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.554061 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.554280 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.555426 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.572127 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.572251 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.572845 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.573238 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fh2dj\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.714023 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:06 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:06 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:06 crc kubenswrapper[4857]: healthz check 
failed Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.714105 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.810220 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.811795 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.811935 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.853125 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"] Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.854650 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.875270 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.892173 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"] Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.904256 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"] Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913039 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913165 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913247 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913329 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913413 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.913467 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z842t\" (UniqueName: \"kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.916633 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.917237 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.929387 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 18 14:04:06 crc kubenswrapper[4857]: I0318 14:04:06.944810 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.024090 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.024654 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"] Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.025180 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.025247 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.025326 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z842t\" (UniqueName: \"kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.026924 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.031201 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.048195 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.251848 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z842t\" (UniqueName: \"kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t\") pod \"redhat-marketplace-l9sbh\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.284719 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"] Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.286890 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.373270 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.389647 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b585n\" (UniqueName: \"kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.389962 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.390110 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492544 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492696 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content\") 
pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492774 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9rt\" (UniqueName: \"kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492834 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b585n\" (UniqueName: \"kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492889 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.492952 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.494669 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities\") pod 
\"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.788830 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.813663 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:07 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:07 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:07 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.813770 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.842369 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9rt\" (UniqueName: \"kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.842702 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities\") pod 
\"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.844846 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.845544 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.852166 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.852582 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.852779 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.854018 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"] Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.854174 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"] Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.857258 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"] Mar 18 14:04:07 crc kubenswrapper[4857]: I0318 14:04:07.857926 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.155976 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b585n\" (UniqueName: \"kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n\") pod \"redhat-operators-2g48f\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.156790 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.364540 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.364655 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.364725 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rm8p\" (UniqueName: \"kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.399639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" event={"ID":"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825","Type":"ContainerDied","Data":"4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad"} Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.399696 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ce19625b4883e8c8612d6185dbdeb097985e58f0a6f81ff497f114dedd7d8ad" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.425589 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-mb9rt\" (UniqueName: \"kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt\") pod \"redhat-marketplace-c89xj\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.467294 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.467377 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.467418 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rm8p\" (UniqueName: \"kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.468126 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:08 crc kubenswrapper[4857]: I0318 14:04:08.468361 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.269234 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.317514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rm8p\" (UniqueName: \"kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p\") pod \"redhat-operators-lmqk2\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.422160 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.424611 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.425912 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.543068 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" 
event={"ID":"1a8c5344-76bd-4d55-aab5-d1a100a5c08c","Type":"ContainerDied","Data":"923a3ab85ea9ab9c0f41ed098de986cfc93e762914e88451cb8e456f4c146d75"} Mar 18 14:04:09 crc kubenswrapper[4857]: I0318 14:04:09.543212 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="923a3ab85ea9ab9c0f41ed098de986cfc93e762914e88451cb8e456f4c146d75" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.246401 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.287358 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:10 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:10 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:10 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.287485 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.322066 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:10 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:10 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:10 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.322130 
4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.350601 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.354690 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.362844 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config\") pod \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.362945 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config\") pod \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363026 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles\") pod \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363114 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume\") pod 
\"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363194 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca\") pod \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363228 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdsdd\" (UniqueName: \"kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd\") pod \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363275 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcf6v\" (UniqueName: \"kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v\") pod \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363311 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca\") pod \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363336 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert\") pod \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363364 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-j2x9s\" (UniqueName: \"kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s\") pod \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\" (UID: \"1a8c5344-76bd-4d55-aab5-d1a100a5c08c\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363405 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume\") pod \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\" (UID: \"d067c327-e7cb-4fbc-a54f-4ac7bd9c7825\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.363475 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert\") pod \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\" (UID: \"867f36a7-afd9-4d67-a7d3-42f2ca67ac91\") " Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.394632 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1a8c5344-76bd-4d55-aab5-d1a100a5c08c" (UID: "1a8c5344-76bd-4d55-aab5-d1a100a5c08c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.396499 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config" (OuterVolumeSpecName: "config") pod "1a8c5344-76bd-4d55-aab5-d1a100a5c08c" (UID: "1a8c5344-76bd-4d55-aab5-d1a100a5c08c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.397624 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config" (OuterVolumeSpecName: "config") pod "867f36a7-afd9-4d67-a7d3-42f2ca67ac91" (UID: "867f36a7-afd9-4d67-a7d3-42f2ca67ac91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.398588 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca" (OuterVolumeSpecName: "client-ca") pod "867f36a7-afd9-4d67-a7d3-42f2ca67ac91" (UID: "867f36a7-afd9-4d67-a7d3-42f2ca67ac91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.672151 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a8c5344-76bd-4d55-aab5-d1a100a5c08c" (UID: "1a8c5344-76bd-4d55-aab5-d1a100a5c08c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.677579 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.677620 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.677646 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.677659 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.677681 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.749443 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.750739 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj" event={"ID":"867f36a7-afd9-4d67-a7d3-42f2ca67ac91","Type":"ContainerDied","Data":"f84cbf52583b5e9127589e656ae3cad2952a54fc1dff25724f00e00a6edecaeb"} Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.751079 4857 scope.go:117] "RemoveContainer" containerID="d7a9cb313bec31ca1e4b82f6e74b4c77c481d56c872b55b051425f2e6186ecdb" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.751713 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5t7m" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.781023 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume" (OuterVolumeSpecName: "config-volume") pod "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" (UID: "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.786417 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.883315 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.906622 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a8c5344-76bd-4d55-aab5-d1a100a5c08c" (UID: "1a8c5344-76bd-4d55-aab5-d1a100a5c08c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.906659 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd" (OuterVolumeSpecName: "kube-api-access-kdsdd") pod "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" (UID: "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825"). InnerVolumeSpecName "kube-api-access-kdsdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.907182 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "867f36a7-afd9-4d67-a7d3-42f2ca67ac91" (UID: "867f36a7-afd9-4d67-a7d3-42f2ca67ac91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.907678 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" (UID: "d067c327-e7cb-4fbc-a54f-4ac7bd9c7825"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.907724 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v" (OuterVolumeSpecName: "kube-api-access-xcf6v") pod "867f36a7-afd9-4d67-a7d3-42f2ca67ac91" (UID: "867f36a7-afd9-4d67-a7d3-42f2ca67ac91"). InnerVolumeSpecName "kube-api-access-xcf6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.917691 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s" (OuterVolumeSpecName: "kube-api-access-j2x9s") pod "1a8c5344-76bd-4d55-aab5-d1a100a5c08c" (UID: "1a8c5344-76bd-4d55-aab5-d1a100a5c08c"). InnerVolumeSpecName "kube-api-access-j2x9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.965650 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564044-5r7zc"] Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997771 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997813 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdsdd\" (UniqueName: \"kubernetes.io/projected/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825-kube-api-access-kdsdd\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997826 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcf6v\" (UniqueName: \"kubernetes.io/projected/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-kube-api-access-xcf6v\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997838 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2x9s\" (UniqueName: \"kubernetes.io/projected/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-kube-api-access-j2x9s\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997850 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a8c5344-76bd-4d55-aab5-d1a100a5c08c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:10 crc kubenswrapper[4857]: I0318 14:04:10.997861 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/867f36a7-afd9-4d67-a7d3-42f2ca67ac91-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.575835 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"] Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.594741 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rd2fj"] Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.617593 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"] Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.637643 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5t7m"] Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.647010 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:11 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:11 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:11 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.647088 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.768116 4857 ???:1] "http: TLS handshake error from 192.168.126.11:44638: no serving certificate available for the kubelet" Mar 18 14:04:11 crc kubenswrapper[4857]: I0318 14:04:11.789694 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" event={"ID":"af5933af-d25b-4d7a-8fda-e95c340a38ac","Type":"ContainerStarted","Data":"feca2abe0447ec5d520e7c2a9b60a1003095c525f50f3e0c1cf39e3cdb7b8f13"} 
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.207221 4857 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qr84c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]log ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]etcd ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/max-in-flight-filter ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 14:04:12 crc kubenswrapper[4857]: livez check failed
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.207635 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" podUID="b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.356879 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]process-running ok
Mar 18 14:04:12 crc kubenswrapper[4857]: healthz check failed
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.356987 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.384299 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cghkz"]
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.439347 4857 ???:1] "http: TLS handshake error from 192.168.126.11:44650: no serving certificate available for the kubelet"
Mar 18 14:04:12 crc kubenswrapper[4857]: W0318 14:04:12.589630 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7db57b_a1ee_4fd5_b525_57c3b7eb8283.slice/crio-dae5e41b9e2cabe78af4140852a07126def56d09cef456ced526fc169154068a WatchSource:0}: Error finding container dae5e41b9e2cabe78af4140852a07126def56d09cef456ced526fc169154068a: Status 404 returned error can't find the container with id dae5e41b9e2cabe78af4140852a07126def56d09cef456ced526fc169154068a
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.592965 4857 patch_prober.go:28] interesting pod/console-f9d7485db-4bqqp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.593002 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4bqqp" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.675768 4857 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qr84c container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]log ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]etcd ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/max-in-flight-filter ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-startinformers ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 18 14:04:12 crc kubenswrapper[4857]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 18 14:04:12 crc kubenswrapper[4857]: livez check failed
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.675832 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" podUID="b0f43fc2-1ca2-4df7-9105-e8eaa0c9475d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.774747 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dz4vq"]
Mar 18 14:04:12 crc kubenswrapper[4857]: W0318 14:04:12.805336 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1983ba6a_9da7_4d16_8135_1c928be5676b.slice/crio-e974a60ef5aee748172c8d2fa381e156df05ae8cbf65f6355b18707e0e51d6e7 WatchSource:0}: Error finding container e974a60ef5aee748172c8d2fa381e156df05ae8cbf65f6355b18707e0e51d6e7: Status 404 returned error can't find the container with id e974a60ef5aee748172c8d2fa381e156df05ae8cbf65f6355b18707e0e51d6e7
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.806442 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"]
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.806725 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerStarted","Data":"dae5e41b9e2cabe78af4140852a07126def56d09cef456ced526fc169154068a"}
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.843857 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"]
Mar 18 14:04:12 crc kubenswrapper[4857]: W0318 14:04:12.848503 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77513906_1d0e_4d29_a4d3_d6cc71e023a8.slice/crio-2c3761f41c3564236d10217747b4c9ebfe510d0a7729ac65e4ed7a30536d33ce WatchSource:0}: Error finding container 2c3761f41c3564236d10217747b4c9ebfe510d0a7729ac65e4ed7a30536d33ce: Status 404 returned error can't find the container with id 2c3761f41c3564236d10217747b4c9ebfe510d0a7729ac65e4ed7a30536d33ce
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.884111 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"]
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.907734 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"]
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.921404 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"]
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.954877 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.954944 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.954999 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-gvkpz"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955291 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955399 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955869 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465"} pod="openshift-console/downloads-7954f5f757-gvkpz" containerMessage="Container download-server failed liveness probe, will be restarted"
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955953 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" containerID="cri-o://4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465" gracePeriod=2
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955881 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:04:12 crc kubenswrapper[4857]: I0318 14:04:12.955999 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.051962 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.066538 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.072596 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.084016 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"]
Mar 18 14:04:13 crc kubenswrapper[4857]: W0318 14:04:13.115437 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc4fecd50_1411_4810_b876_5ee31af001cb.slice/crio-07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0 WatchSource:0}: Error finding container 07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0: Status 404 returned error can't find the container with id 07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.179899 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" path="/var/lib/kubelet/pods/1a8c5344-76bd-4d55-aab5-d1a100a5c08c/volumes"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.183382 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" path="/var/lib/kubelet/pods/867f36a7-afd9-4d67-a7d3-42f2ca67ac91/volumes"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.315764 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 14:04:13 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld
Mar 18 14:04:13 crc kubenswrapper[4857]: [+]process-running ok
Mar 18 14:04:13 crc kubenswrapper[4857]: healthz check failed
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.315839 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.538715 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"]
Mar 18 14:04:13 crc kubenswrapper[4857]: E0318 14:04:13.539190 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539256 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: E0318 14:04:13.539282 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" containerName="collect-profiles"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539291 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" containerName="collect-profiles"
Mar 18 14:04:13 crc kubenswrapper[4857]: E0318 14:04:13.539314 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerName="route-controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539322 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerName="route-controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539630 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="867f36a7-afd9-4d67-a7d3-42f2ca67ac91" containerName="route-controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539665 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8c5344-76bd-4d55-aab5-d1a100a5c08c" containerName="controller-manager"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.539679 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" containerName="collect-profiles"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.540465 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.540656 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.542267 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.544927 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.545321 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.545445 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.545647 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.546118 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.546234 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.546569 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.546691 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.547375 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.559649 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.560183 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.560530 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.613500 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.622146 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628528 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628581 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwnxq\" (UniqueName: \"kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628611 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ntc\" (UniqueName: \"kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628656 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628720 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628768 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628803 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628821 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.628867 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.638358 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"]
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.720345 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.729908 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.730183 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.730576 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnxq\" (UniqueName: \"kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.730690 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ntc\" (UniqueName: \"kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.731053 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.731249 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.731342 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.731438 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.731470 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.737943 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.740287 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.740567 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.933149 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.934225 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.936095 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.937263 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.953488 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnxq\" (UniqueName: \"kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq\") pod \"route-controller-manager-67c5d9cb48-b5pfc\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.957238 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ntc\" (UniqueName: \"kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc\") pod \"controller-manager-7b58c5854-t2tjx\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.958918 4857 generic.go:334] "Generic (PLEG): container finished" podID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerID="e19b88929a72f0b151ff3c9408e20237c0608079b0d351ce94ded6dd7062dacb" exitCode=0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.959827 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerDied","Data":"e19b88929a72f0b151ff3c9408e20237c0608079b0d351ce94ded6dd7062dacb"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.959882 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerStarted","Data":"2c3761f41c3564236d10217747b4c9ebfe510d0a7729ac65e4ed7a30536d33ce"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.964792 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae","Type":"ContainerStarted","Data":"bc2b060a1d574bc0dcb0a5892fb2fc0efb2d84f3784df12ce65578db454f9abc"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.964844 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae","Type":"ContainerStarted","Data":"26b5303be0a51f4f2f19012313570443432248e3f5cba4651058d63fc074de0b"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.970430 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" event={"ID":"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3","Type":"ContainerStarted","Data":"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.970513 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" event={"ID":"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3","Type":"ContainerStarted","Data":"61b53ea0ede3cc5b82c0fb0fb28784664d96b9e834ef34ffd6f260589f85b7cc"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.971030 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.973126 4857 generic.go:334] "Generic (PLEG): container finished" podID="ef638f17-5999-467e-b170-8ef20068e451" containerID="4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465" exitCode=0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.973175 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerDied","Data":"4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.973238 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerStarted","Data":"9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.973726 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gvkpz"
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.974495 4857 generic.go:334] "Generic (PLEG): container finished" podID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerID="4fc80e37d1feaf06e64f2acd4d7dd9d2c29b6a8eb3e7eb7d5545333c442ec6c1" exitCode=0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.974568 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerDied","Data":"4fc80e37d1feaf06e64f2acd4d7dd9d2c29b6a8eb3e7eb7d5545333c442ec6c1"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.974613 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerStarted","Data":"a480e4857b49ee475b6a80df2655d558a1f5ee249b65a372eee2a2d64d9e4c36"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.979556 4857 generic.go:334] "Generic (PLEG): container finished" podID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerID="fd37efbb538c041876a93ee5f2163b7e9db5ff2c56f85a394859c2d513d04024" exitCode=0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.979965 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerDied","Data":"fd37efbb538c041876a93ee5f2163b7e9db5ff2c56f85a394859c2d513d04024"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.980025 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerStarted","Data":"e974a60ef5aee748172c8d2fa381e156df05ae8cbf65f6355b18707e0e51d6e7"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.982632 4857 generic.go:334] "Generic (PLEG): container finished" podID="a7272920-8e13-4414-8a32-dfea84d2460f" containerID="d33696b0fcdac1a6e2c56ee85a1bcabad1fe3c0e82f8ddd64b7318c7e1de7793" exitCode=0
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.982694 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerDied","Data":"d33696b0fcdac1a6e2c56ee85a1bcabad1fe3c0e82f8ddd64b7318c7e1de7793"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.982721 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerStarted","Data":"331598e380e48fc28bf571a4d5c6608ee3ca32e646c707c85f04e95232253156"}
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.983054 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:04:13 crc kubenswrapper[4857]: I0318 14:04:13.983119 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.024036 4857 generic.go:334] "Generic (PLEG): container finished" podID="37ef0e05-d551-4cd1-9399-be898e6a5c85"
containerID="c68420996ef7af7e9b1a79f72cc65ecb36965ffe0514886d2ee871adf44df785" exitCode=0 Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.024108 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerDied","Data":"c68420996ef7af7e9b1a79f72cc65ecb36965ffe0514886d2ee871adf44df785"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.024133 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerStarted","Data":"12b9161d37acd741965229c239c7726c0a8659983d3c2dc38a28981831cc06f3"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.033464 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" podStartSLOduration=188.033415486 podStartE2EDuration="3m8.033415486s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:14.028737048 +0000 UTC m=+238.157865505" watchObservedRunningTime="2026-03-18 14:04:14.033415486 +0000 UTC m=+238.162543943" Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.035149 4857 generic.go:334] "Generic (PLEG): container finished" podID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerID="1313a1b817c3a0ca16c4ff79007b4c8eea00534fe3fe39e9f9734d1469c87110" exitCode=0 Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.035213 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerDied","Data":"1313a1b817c3a0ca16c4ff79007b4c8eea00534fe3fe39e9f9734d1469c87110"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.035237 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerStarted","Data":"618c731494aab00a22f93bbd2fdbb8b746f2dbd36bd4b17e03e8f0b3d7add7e3"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.043886 4857 generic.go:334] "Generic (PLEG): container finished" podID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerID="7dce3cda667cceaccdac133e0339c8101f877d800a628cf73c362ea593b143c1" exitCode=0 Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.043949 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerDied","Data":"7dce3cda667cceaccdac133e0339c8101f877d800a628cf73c362ea593b143c1"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.043975 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerStarted","Data":"e6e8626a3e7725bc57f2cf89f82ce6c8ab6d00ef09939adbbea6eb834d8fec59"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.052178 4857 generic.go:334] "Generic (PLEG): container finished" podID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerID="014e065bf5b236c2e0825caad82e605580730b831707b051cad6ecebca748eb6" exitCode=0 Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.052635 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerDied","Data":"014e065bf5b236c2e0825caad82e605580730b831707b051cad6ecebca748eb6"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.077900 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"c4fecd50-1411-4810-b876-5ee31af001cb","Type":"ContainerStarted","Data":"07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0"} Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.138223 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.148394 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=8.148367333 podStartE2EDuration="8.148367333s" podCreationTimestamp="2026-03-18 14:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:14.143528391 +0000 UTC m=+238.272656848" watchObservedRunningTime="2026-03-18 14:04:14.148367333 +0000 UTC m=+238.277495790" Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.157230 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.429917 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:14 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:14 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:14 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:14 crc kubenswrapper[4857]: I0318 14:04:14.430049 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.333191 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:15 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:15 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:15 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.333822 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.355441 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.355522 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.368572 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c4fecd50-1411-4810-b876-5ee31af001cb","Type":"ContainerStarted","Data":"98cf6eb9430260484c4e1389255654872fd32e2a4a8d69a0c2966426ebb6bed4"} Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.368660 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"] Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.432936 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=11.426391006 podStartE2EDuration="11.426391006s" podCreationTimestamp="2026-03-18 14:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:15.417545773 +0000 UTC m=+239.546674240" watchObservedRunningTime="2026-03-18 14:04:15.426391006 +0000 UTC m=+239.555519463" Mar 18 14:04:15 crc kubenswrapper[4857]: W0318 14:04:15.512262 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a829888_34a0_4f4e_8266_fa8c643429b1.slice/crio-1446fc6ad478cab74f8d27070534c415daa7096eb0b29c5d701baca8fe49c2d5 WatchSource:0}: Error finding container 1446fc6ad478cab74f8d27070534c415daa7096eb0b29c5d701baca8fe49c2d5: Status 404 returned 
error can't find the container with id 1446fc6ad478cab74f8d27070534c415daa7096eb0b29c5d701baca8fe49c2d5 Mar 18 14:04:15 crc kubenswrapper[4857]: I0318 14:04:15.735759 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"] Mar 18 14:04:16 crc kubenswrapper[4857]: I0318 14:04:16.321404 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:16 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:16 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:16 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:16 crc kubenswrapper[4857]: I0318 14:04:16.321479 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:16 crc kubenswrapper[4857]: I0318 14:04:16.523198 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" event={"ID":"107ef63e-5846-438f-97b0-a57eaeab57a7","Type":"ContainerStarted","Data":"de99ca6742c203976ebe068bfce2951c1a84d278c3399845a6a1fd40c6e712f9"} Mar 18 14:04:16 crc kubenswrapper[4857]: I0318 14:04:16.543259 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" event={"ID":"7a829888-34a0-4f4e-8266-fa8c643429b1","Type":"ContainerStarted","Data":"1446fc6ad478cab74f8d27070534c415daa7096eb0b29c5d701baca8fe49c2d5"} Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.400521 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:17 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:17 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:17 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.400811 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.609786 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" event={"ID":"7a829888-34a0-4f4e-8266-fa8c643429b1","Type":"ContainerStarted","Data":"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293"} Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.611017 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.616424 4857 generic.go:334] "Generic (PLEG): container finished" podID="c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" containerID="bc2b060a1d574bc0dcb0a5892fb2fc0efb2d84f3784df12ce65578db454f9abc" exitCode=0 Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.616510 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae","Type":"ContainerDied","Data":"bc2b060a1d574bc0dcb0a5892fb2fc0efb2d84f3784df12ce65578db454f9abc"} Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.622096 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" event={"ID":"107ef63e-5846-438f-97b0-a57eaeab57a7","Type":"ContainerStarted","Data":"7a181793d9300a2223f52d4ba08578de88bb6dc7edd8acd49105aaba25dd5c3b"} Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.623406 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.718193 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.733218 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.735292 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qr84c" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.880815 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" podStartSLOduration=12.880789766 podStartE2EDuration="12.880789766s" podCreationTimestamp="2026-03-18 14:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:17.647847448 +0000 UTC m=+241.776975925" watchObservedRunningTime="2026-03-18 14:04:17.880789766 +0000 UTC m=+242.009918223" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.917872 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" Mar 18 14:04:17 crc kubenswrapper[4857]: I0318 14:04:17.927683 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" podStartSLOduration=12.92765298 podStartE2EDuration="12.92765298s" podCreationTimestamp="2026-03-18 14:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:04:17.882523524 +0000 UTC m=+242.011651981" watchObservedRunningTime="2026-03-18 14:04:17.92765298 +0000 UTC m=+242.056781437" Mar 18 14:04:18 crc kubenswrapper[4857]: I0318 14:04:18.394711 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:18 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:18 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:18 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:18 crc kubenswrapper[4857]: I0318 14:04:18.394813 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:18 crc kubenswrapper[4857]: I0318 14:04:18.697333 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"] Mar 18 14:04:18 crc kubenswrapper[4857]: I0318 14:04:18.731269 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"] Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.479767 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 14:04:19 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:19 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:19 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.479922 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.835349 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4fecd50-1411-4810-b876-5ee31af001cb" containerID="98cf6eb9430260484c4e1389255654872fd32e2a4a8d69a0c2966426ebb6bed4" exitCode=0 Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.836195 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c4fecd50-1411-4810-b876-5ee31af001cb","Type":"ContainerDied","Data":"98cf6eb9430260484c4e1389255654872fd32e2a4a8d69a0c2966426ebb6bed4"} Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.921866 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.937680 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir\") pod \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.937858 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access\") pod \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\" (UID: \"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae\") " Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.939371 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" (UID: "c13133d0-7ecb-43ee-9087-4b3fed7fa6ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:04:19 crc kubenswrapper[4857]: I0318 14:04:19.961419 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" (UID: "c13133d0-7ecb-43ee-9087-4b3fed7fa6ae"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.040951 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.040993 4857 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c13133d0-7ecb-43ee-9087-4b3fed7fa6ae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.317948 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:20 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:20 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:20 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.318026 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.869362 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" podUID="107ef63e-5846-438f-97b0-a57eaeab57a7" containerName="route-controller-manager" containerID="cri-o://7a181793d9300a2223f52d4ba08578de88bb6dc7edd8acd49105aaba25dd5c3b" gracePeriod=30 Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.870092 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.902465 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c13133d0-7ecb-43ee-9087-4b3fed7fa6ae","Type":"ContainerDied","Data":"26b5303be0a51f4f2f19012313570443432248e3f5cba4651058d63fc074de0b"} Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.902566 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b5303be0a51f4f2f19012313570443432248e3f5cba4651058d63fc074de0b" Mar 18 14:04:20 crc kubenswrapper[4857]: I0318 14:04:20.902990 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" podUID="7a829888-34a0-4f4e-8266-fa8c643429b1" containerName="controller-manager" containerID="cri-o://d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293" gracePeriod=30 Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.331979 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:21 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:21 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:21 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.332272 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.413648 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.441253 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.456428 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access\") pod \"c4fecd50-1411-4810-b876-5ee31af001cb\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.457655 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles\") pod \"7a829888-34a0-4f4e-8266-fa8c643429b1\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.457720 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca\") pod \"7a829888-34a0-4f4e-8266-fa8c643429b1\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.457819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config\") pod \"7a829888-34a0-4f4e-8266-fa8c643429b1\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.457949 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert\") pod \"7a829888-34a0-4f4e-8266-fa8c643429b1\" (UID: 
\"7a829888-34a0-4f4e-8266-fa8c643429b1\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.458005 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8ntc\" (UniqueName: \"kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc\") pod \"7a829888-34a0-4f4e-8266-fa8c643429b1\" (UID: \"7a829888-34a0-4f4e-8266-fa8c643429b1\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.458061 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir\") pod \"c4fecd50-1411-4810-b876-5ee31af001cb\" (UID: \"c4fecd50-1411-4810-b876-5ee31af001cb\") " Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.458364 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c4fecd50-1411-4810-b876-5ee31af001cb" (UID: "c4fecd50-1411-4810-b876-5ee31af001cb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.459220 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a829888-34a0-4f4e-8266-fa8c643429b1" (UID: "7a829888-34a0-4f4e-8266-fa8c643429b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.460970 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a829888-34a0-4f4e-8266-fa8c643429b1" (UID: "7a829888-34a0-4f4e-8266-fa8c643429b1"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.461894 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config" (OuterVolumeSpecName: "config") pod "7a829888-34a0-4f4e-8266-fa8c643429b1" (UID: "7a829888-34a0-4f4e-8266-fa8c643429b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.462551 4857 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4fecd50-1411-4810-b876-5ee31af001cb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.462596 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.462620 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.462651 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a829888-34a0-4f4e-8266-fa8c643429b1-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.599669 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c4fecd50-1411-4810-b876-5ee31af001cb" (UID: "c4fecd50-1411-4810-b876-5ee31af001cb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.600705 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc" (OuterVolumeSpecName: "kube-api-access-n8ntc") pod "7a829888-34a0-4f4e-8266-fa8c643429b1" (UID: "7a829888-34a0-4f4e-8266-fa8c643429b1"). InnerVolumeSpecName "kube-api-access-n8ntc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.603001 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a829888-34a0-4f4e-8266-fa8c643429b1" (UID: "7a829888-34a0-4f4e-8266-fa8c643429b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.674652 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a829888-34a0-4f4e-8266-fa8c643429b1-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.674685 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8ntc\" (UniqueName: \"kubernetes.io/projected/7a829888-34a0-4f4e-8266-fa8c643429b1-kube-api-access-n8ntc\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.674696 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4fecd50-1411-4810-b876-5ee31af001cb-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.913495 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"c4fecd50-1411-4810-b876-5ee31af001cb","Type":"ContainerDied","Data":"07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0"} Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.913549 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07748a6d84b1a49e5773c40cdda840dbc68ec6f5515439305554c44e5824f3f0" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.913653 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.921115 4857 generic.go:334] "Generic (PLEG): container finished" podID="107ef63e-5846-438f-97b0-a57eaeab57a7" containerID="7a181793d9300a2223f52d4ba08578de88bb6dc7edd8acd49105aaba25dd5c3b" exitCode=0 Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.921214 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" event={"ID":"107ef63e-5846-438f-97b0-a57eaeab57a7","Type":"ContainerDied","Data":"7a181793d9300a2223f52d4ba08578de88bb6dc7edd8acd49105aaba25dd5c3b"} Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.926012 4857 generic.go:334] "Generic (PLEG): container finished" podID="7a829888-34a0-4f4e-8266-fa8c643429b1" containerID="d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293" exitCode=0 Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.926059 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" event={"ID":"7a829888-34a0-4f4e-8266-fa8c643429b1","Type":"ContainerDied","Data":"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293"} Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.926136 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" 
event={"ID":"7a829888-34a0-4f4e-8266-fa8c643429b1","Type":"ContainerDied","Data":"1446fc6ad478cab74f8d27070534c415daa7096eb0b29c5d701baca8fe49c2d5"} Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.926203 4857 scope.go:117] "RemoveContainer" containerID="d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293" Mar 18 14:04:21 crc kubenswrapper[4857]: I0318 14:04:21.926222 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b58c5854-t2tjx" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.034293 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"] Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.041411 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b58c5854-t2tjx"] Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.071337 4857 scope.go:117] "RemoveContainer" containerID="d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293" Mar 18 14:04:22 crc kubenswrapper[4857]: E0318 14:04:22.137733 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293\": container with ID starting with d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293 not found: ID does not exist" containerID="d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.137825 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293"} err="failed to get container status \"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293\": rpc error: code = NotFound desc = could not find container 
\"d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293\": container with ID starting with d9cab476a13fba5dfcec322ddb414f34cc568dd9af88f9572cf337c14a63a293 not found: ID does not exist" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.219425 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.429306 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert\") pod \"107ef63e-5846-438f-97b0-a57eaeab57a7\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.429413 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config\") pod \"107ef63e-5846-438f-97b0-a57eaeab57a7\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.429502 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwnxq\" (UniqueName: \"kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq\") pod \"107ef63e-5846-438f-97b0-a57eaeab57a7\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.429552 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca\") pod \"107ef63e-5846-438f-97b0-a57eaeab57a7\" (UID: \"107ef63e-5846-438f-97b0-a57eaeab57a7\") " Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.431572 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "107ef63e-5846-438f-97b0-a57eaeab57a7" (UID: "107ef63e-5846-438f-97b0-a57eaeab57a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.431930 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:22 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:22 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:22 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.432017 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.432945 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config" (OuterVolumeSpecName: "config") pod "107ef63e-5846-438f-97b0-a57eaeab57a7" (UID: "107ef63e-5846-438f-97b0-a57eaeab57a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.439357 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq" (OuterVolumeSpecName: "kube-api-access-gwnxq") pod "107ef63e-5846-438f-97b0-a57eaeab57a7" (UID: "107ef63e-5846-438f-97b0-a57eaeab57a7"). InnerVolumeSpecName "kube-api-access-gwnxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.439335 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "107ef63e-5846-438f-97b0-a57eaeab57a7" (UID: "107ef63e-5846-438f-97b0-a57eaeab57a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.531794 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.531854 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwnxq\" (UniqueName: \"kubernetes.io/projected/107ef63e-5846-438f-97b0-a57eaeab57a7-kube-api-access-gwnxq\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.531865 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/107ef63e-5846-438f-97b0-a57eaeab57a7-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.531873 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/107ef63e-5846-438f-97b0-a57eaeab57a7-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.611277 4857 patch_prober.go:28] interesting pod/console-f9d7485db-4bqqp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.611429 4857 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-4bqqp" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.859790 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:04:22 crc kubenswrapper[4857]: E0318 14:04:22.860803 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.860839 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: E0318 14:04:22.860883 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fecd50-1411-4810-b876-5ee31af001cb" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.860892 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fecd50-1411-4810-b876-5ee31af001cb" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: E0318 14:04:22.860904 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a829888-34a0-4f4e-8266-fa8c643429b1" containerName="controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.860912 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a829888-34a0-4f4e-8266-fa8c643429b1" containerName="controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: E0318 14:04:22.860921 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107ef63e-5846-438f-97b0-a57eaeab57a7" containerName="route-controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.860927 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="107ef63e-5846-438f-97b0-a57eaeab57a7" 
containerName="route-controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.861114 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fecd50-1411-4810-b876-5ee31af001cb" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.861136 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="107ef63e-5846-438f-97b0-a57eaeab57a7" containerName="route-controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.861149 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a829888-34a0-4f4e-8266-fa8c643429b1" containerName="controller-manager" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.862830 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13133d0-7ecb-43ee-9087-4b3fed7fa6ae" containerName="pruner" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.863600 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.866001 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.866378 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.866641 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.867126 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.867316 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 
18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.868294 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.872170 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.872318 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.873359 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.875719 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.882114 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.958283 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" event={"ID":"107ef63e-5846-438f-97b0-a57eaeab57a7","Type":"ContainerDied","Data":"de99ca6742c203976ebe068bfce2951c1a84d278c3399845a6a1fd40c6e712f9"} Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.958314 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc" Mar 18 14:04:22 crc kubenswrapper[4857]: I0318 14:04:22.958344 4857 scope.go:117] "RemoveContainer" containerID="7a181793d9300a2223f52d4ba08578de88bb6dc7edd8acd49105aaba25dd5c3b" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025242 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025336 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025436 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025471 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " 
pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025594 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025689 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025740 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nxt\" (UniqueName: \"kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025788 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn57t\" (UniqueName: \"kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.025818 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.057371 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.057431 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.058348 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.058378 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.074305 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"] Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 
14:04:23.077521 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c5d9cb48-b5pfc"] Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127252 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127329 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127399 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127475 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127533 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127585 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6nxt\" (UniqueName: \"kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127614 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn57t\" (UniqueName: \"kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.127653 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" 
Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.128587 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.129076 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.129212 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.129719 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.130125 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " 
pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.134596 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.144067 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.184708 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107ef63e-5846-438f-97b0-a57eaeab57a7" path="/var/lib/kubelet/pods/107ef63e-5846-438f-97b0-a57eaeab57a7/volumes" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.185788 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn57t\" (UniqueName: \"kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t\") pod \"controller-manager-6d565f68c9-5cqcw\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.198814 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a829888-34a0-4f4e-8266-fa8c643429b1" path="/var/lib/kubelet/pods/7a829888-34a0-4f4e-8266-fa8c643429b1/volumes" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.320496 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:23 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:23 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:23 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.320585 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.323689 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nxt\" (UniqueName: \"kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt\") pod \"route-controller-manager-6475fd654c-7vpnr\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.497580 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:04:23 crc kubenswrapper[4857]: I0318 14:04:23.510717 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:04:24 crc kubenswrapper[4857]: I0318 14:04:24.316621 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:24 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:24 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:24 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:24 crc kubenswrapper[4857]: I0318 14:04:24.316706 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:25 crc kubenswrapper[4857]: I0318 14:04:25.314223 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:25 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:25 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:25 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:25 crc kubenswrapper[4857]: I0318 14:04:25.314327 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:26 crc kubenswrapper[4857]: I0318 14:04:26.442737 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:26 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:26 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:26 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:26 crc kubenswrapper[4857]: I0318 14:04:26.443584 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:26 crc kubenswrapper[4857]: I0318 14:04:26.544073 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:04:26 crc kubenswrapper[4857]: I0318 14:04:26.563176 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:04:27 crc kubenswrapper[4857]: I0318 14:04:27.038360 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:04:27 crc kubenswrapper[4857]: I0318 14:04:27.038442 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:04:27 crc kubenswrapper[4857]: I0318 14:04:27.362064 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:27 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:27 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:27 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:27 crc kubenswrapper[4857]: I0318 14:04:27.362130 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:28 crc kubenswrapper[4857]: I0318 14:04:28.313904 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:28 crc kubenswrapper[4857]: [-]has-synced failed: reason withheld Mar 18 14:04:28 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:28 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:28 crc kubenswrapper[4857]: I0318 14:04:28.313950 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:29 crc kubenswrapper[4857]: I0318 14:04:29.329511 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 14:04:29 crc kubenswrapper[4857]: [+]has-synced ok Mar 18 14:04:29 crc kubenswrapper[4857]: [+]process-running ok Mar 18 14:04:29 crc kubenswrapper[4857]: healthz check failed Mar 18 14:04:29 crc kubenswrapper[4857]: I0318 14:04:29.330015 4857 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:04:30 crc kubenswrapper[4857]: I0318 14:04:30.314220 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:04:30 crc kubenswrapper[4857]: I0318 14:04:30.317096 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.634104 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.643222 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.955136 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.955247 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.955918 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= 
Mar 18 14:04:32 crc kubenswrapper[4857]: I0318 14:04:32.955940 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:33 crc kubenswrapper[4857]: I0318 14:04:33.761213 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 14:04:37 crc kubenswrapper[4857]: I0318 14:04:37.071473 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.408491 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.420504 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.421081 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.426385 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.427048 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.507060 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.507359 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.595345 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.609831 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.609898 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.610706 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.639061 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:38 crc kubenswrapper[4857]: E0318 14:04:38.683099 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest sha256:e4bd35b83c0fba6d225bfa8f356a8e5df013653884a4233d5a7c4e3b5d503bae in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 18 14:04:38 crc kubenswrapper[4857]: E0318 14:04:38.683900 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qggc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dz4vq_openshift-marketplace(1983ba6a-9da7-4d16-8135-1c928be5676b): ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest sha256:e4bd35b83c0fba6d225bfa8f356a8e5df013653884a4233d5a7c4e3b5d503bae in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Mar 18 14:04:38 crc kubenswrapper[4857]: E0318 14:04:38.685173 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: 
\"copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/redhat/community-operator-index:v4.18: reading manifest sha256:e4bd35b83c0fba6d225bfa8f356a8e5df013653884a4233d5a7c4e3b5d503bae in registry.redhat.io/redhat/community-operator-index: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.702551 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:04:38 crc kubenswrapper[4857]: I0318 14:04:38.940940 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.953256 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.953606 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.953667 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.953256 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial 
tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.954025 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.954278 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.954341 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.955386 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed"} pod="openshift-console/downloads-7954f5f757-gvkpz" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 18 14:04:42 crc kubenswrapper[4857]: I0318 14:04:42.955480 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" containerID="cri-o://9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed" gracePeriod=2 Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.703451 4857 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.704498 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.713971 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.890672 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.890792 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.890878 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.991890 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc 
kubenswrapper[4857]: I0318 14:04:43.992008 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.992028 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.992082 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:43 crc kubenswrapper[4857]: I0318 14:04:43.992199 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:44 crc kubenswrapper[4857]: I0318 14:04:44.012700 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access\") pod \"installer-9-crc\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:44 crc kubenswrapper[4857]: I0318 14:04:44.065041 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:04:44 crc kubenswrapper[4857]: I0318 14:04:44.447261 4857 generic.go:334] "Generic (PLEG): container finished" podID="ef638f17-5999-467e-b170-8ef20068e451" containerID="9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed" exitCode=0 Mar 18 14:04:44 crc kubenswrapper[4857]: I0318 14:04:44.447313 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerDied","Data":"9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed"} Mar 18 14:04:44 crc kubenswrapper[4857]: I0318 14:04:44.447393 4857 scope.go:117] "RemoveContainer" containerID="4650e168058bcbaf8c4a1f80fa167ff69b20bfbe6544eb13f0bbf51333ca9465" Mar 18 14:04:52 crc kubenswrapper[4857]: I0318 14:04:52.953633 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:04:52 crc kubenswrapper[4857]: I0318 14:04:52.954348 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:04:53 crc kubenswrapper[4857]: I0318 14:04:53.428048 4857 ???:1] "http: TLS handshake error from 192.168.126.11:45706: no serving certificate available for the kubelet" Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.038982 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.039380 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.039476 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.040272 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.040385 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839" gracePeriod=600 Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.912869 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839" exitCode=0 Mar 18 14:04:57 crc kubenswrapper[4857]: I0318 14:04:57.913180 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" 
event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839"} Mar 18 14:04:58 crc kubenswrapper[4857]: W0318 14:04:58.746791 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba0e89f9_1992_44b3_ae25_a7d4a939f188.slice/crio-e00e42a40fe7b95d24bec4c4e2672221dff7c2022146d71ed9cf36d196a873b5 WatchSource:0}: Error finding container e00e42a40fe7b95d24bec4c4e2672221dff7c2022146d71ed9cf36d196a873b5: Status 404 returned error can't find the container with id e00e42a40fe7b95d24bec4c4e2672221dff7c2022146d71ed9cf36d196a873b5 Mar 18 14:04:58 crc kubenswrapper[4857]: W0318 14:04:58.749895 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eea8f50_204c_4712_b8a9_206d2c67aa43.slice/crio-f62e5a3b11d150d108a0b27fb518081a0a07e8f4a45ae34eb3c1c5ae43d544f1 WatchSource:0}: Error finding container f62e5a3b11d150d108a0b27fb518081a0a07e8f4a45ae34eb3c1c5ae43d544f1: Status 404 returned error can't find the container with id f62e5a3b11d150d108a0b27fb518081a0a07e8f4a45ae34eb3c1c5ae43d544f1 Mar 18 14:04:58 crc kubenswrapper[4857]: I0318 14:04:58.920368 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" event={"ID":"ba0e89f9-1992-44b3-ae25-a7d4a939f188","Type":"ContainerStarted","Data":"e00e42a40fe7b95d24bec4c4e2672221dff7c2022146d71ed9cf36d196a873b5"} Mar 18 14:04:58 crc kubenswrapper[4857]: I0318 14:04:58.921543 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" event={"ID":"4eea8f50-204c-4712-b8a9-206d2c67aa43","Type":"ContainerStarted","Data":"f62e5a3b11d150d108a0b27fb518081a0a07e8f4a45ae34eb3c1c5ae43d544f1"} Mar 18 14:05:02 crc kubenswrapper[4857]: I0318 14:05:02.953598 4857 
patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:02 crc kubenswrapper[4857]: I0318 14:05:02.954010 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:06 crc kubenswrapper[4857]: I0318 14:05:06.823638 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"] Mar 18 14:05:12 crc kubenswrapper[4857]: I0318 14:05:12.954186 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:12 crc kubenswrapper[4857]: I0318 14:05:12.954797 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:13 crc kubenswrapper[4857]: E0318 14:05:13.922799 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 18 14:05:13 crc kubenswrapper[4857]: E0318 14:05:13.923357 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:05:13 crc kubenswrapper[4857]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 18 14:05:13 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmzp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29564042-j5cmc_openshift-infra(287df787-86a7-4a56-b5a1-fb55b6bed91b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 18 14:05:13 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:05:13 crc kubenswrapper[4857]: E0318 14:05:13.924573 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" Mar 18 14:05:14 crc kubenswrapper[4857]: E0318 14:05:14.047823 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 18 14:05:14 crc 
kubenswrapper[4857]: E0318 14:05:14.047977 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:05:14 crc kubenswrapper[4857]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 18 14:05:14 crc kubenswrapper[4857]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ch7hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29564044-5r7zc_openshift-infra(af5933af-d25b-4d7a-8fda-e95c340a38ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 18 14:05:14 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:05:14 crc kubenswrapper[4857]: E0318 14:05:14.049328 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" Mar 18 14:05:14 crc kubenswrapper[4857]: E0318 14:05:14.632101 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" Mar 18 14:05:14 crc kubenswrapper[4857]: E0318 14:05:14.632134 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" Mar 18 14:05:20 crc kubenswrapper[4857]: E0318 14:05:20.487444 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 18 14:05:20 crc kubenswrapper[4857]: E0318 14:05:20.489319 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b585n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2g48f_openshift-marketplace(9c2eafeb-c191-4d62-ab06-2085407e44e5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:20 crc kubenswrapper[4857]: E0318 14:05:20.490991 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2g48f" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" Mar 18 14:05:22 crc 
kubenswrapper[4857]: E0318 14:05:22.097445 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2g48f" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" Mar 18 14:05:22 crc kubenswrapper[4857]: E0318 14:05:22.176405 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 18 14:05:22 crc kubenswrapper[4857]: E0318 14:05:22.176660 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kb4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hzfl4_openshift-marketplace(37ef0e05-d551-4cd1-9399-be898e6a5c85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:22 crc kubenswrapper[4857]: E0318 14:05:22.178393 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" Mar 18 14:05:23 crc 
kubenswrapper[4857]: I0318 14:05:23.227015 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:23 crc kubenswrapper[4857]: I0318 14:05:23.227117 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:24 crc kubenswrapper[4857]: I0318 14:05:24.473939 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:05:24 crc kubenswrapper[4857]: I0318 14:05:24.475333 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.448858 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 
14:05:26.515618 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.515889 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrs5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-cghkz_openshift-marketplace(9b7db57b-a1ee-4fd5-b525-57c3b7eb8283): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.517553 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.558491 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.558749 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rm8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lmqk2_openshift-marketplace(a7272920-8e13-4414-8a32-dfea84d2460f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.560005 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lmqk2" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" Mar 18 14:05:26 crc 
kubenswrapper[4857]: E0318 14:05:26.566917 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.567129 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jht5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-q8pg8_openshift-marketplace(77513906-1d0e-4d29-a4d3-d6cc71e023a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:26 crc kubenswrapper[4857]: E0318 14:05:26.568343 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q8pg8" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.860415 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lmqk2" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.860498 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q8pg8" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.860676 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.917156 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.917544 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qggc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dz4vq_openshift-marketplace(1983ba6a-9da7-4d16-8135-1c928be5676b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.918825 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.921193 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.921654 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z842t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l9sbh_openshift-marketplace(510c03dc-bd76-40f3-abee-55e80cc97ddb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.922967 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l9sbh" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" Mar 18 14:05:27 crc 
kubenswrapper[4857]: E0318 14:05:27.959953 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.960152 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb9rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-c89xj_openshift-marketplace(f911e035-9c03-4a95-8136-db8bd4e63e9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:05:27 crc kubenswrapper[4857]: E0318 14:05:27.961364 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c89xj" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.212880 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 18 14:05:28 crc kubenswrapper[4857]: W0318 14:05:28.225290 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod629717da_142d_436b_bb10_642182966fd8.slice/crio-d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498 WatchSource:0}: Error finding container d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498: Status 404 returned error can't find the container with id d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498 Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.340665 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.545619 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" event={"ID":"af5933af-d25b-4d7a-8fda-e95c340a38ac","Type":"ContainerStarted","Data":"4b364a8a1f55996f928000ab80476797f1d600e64460f3a63565d9f73b95965b"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.553013 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" 
event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.554741 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" event={"ID":"4eea8f50-204c-4712-b8a9-206d2c67aa43","Type":"ContainerStarted","Data":"827a5e2cf5b4973155f6d1e4f7fbd6b043b6c47cd3235b6ac9b75f8d58e1156e"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.555986 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.558021 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" event={"ID":"ba0e89f9-1992-44b3-ae25-a7d4a939f188","Type":"ContainerStarted","Data":"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.558191 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" podUID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" containerName="controller-manager" containerID="cri-o://5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d" gracePeriod=30 Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.558689 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.559404 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerName="route-controller-manager" 
containerID="cri-o://827a5e2cf5b4973155f6d1e4f7fbd6b043b6c47cd3235b6ac9b75f8d58e1156e" gracePeriod=30 Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.564009 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6c25f42-fe96-46ed-999f-608fed536177","Type":"ContainerStarted","Data":"f0d67d1f36443ddc94de7637d28ab02a97527d5278795b405dafaacf65dc18ac"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.570165 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.573364 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerStarted","Data":"995a79c7a78c4cbfb65584c07d9dbbbed9d22ddc43ab2c793e6cc11dd2a7edc8"} Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.573887 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.574019 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.574061 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.576599 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"629717da-142d-436b-bb10-642182966fd8","Type":"ContainerStarted","Data":"d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498"} Mar 18 14:05:28 crc kubenswrapper[4857]: E0318 14:05:28.582275 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l9sbh" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" Mar 18 14:05:28 crc kubenswrapper[4857]: E0318 14:05:28.582359 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" Mar 18 14:05:28 crc kubenswrapper[4857]: E0318 14:05:28.582410 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c89xj" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.652971 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" podStartSLOduration=70.652927415 podStartE2EDuration="1m10.652927415s" podCreationTimestamp="2026-03-18 14:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:28.651508446 +0000 UTC m=+312.780636903" watchObservedRunningTime="2026-03-18 14:05:28.652927415 +0000 UTC m=+312.782055872" Mar 18 
14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.656228 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" podStartSLOduration=12.255352905 podStartE2EDuration="1m28.656204576s" podCreationTimestamp="2026-03-18 14:04:00 +0000 UTC" firstStartedPulling="2026-03-18 14:04:11.603298839 +0000 UTC m=+235.732427296" lastFinishedPulling="2026-03-18 14:05:28.00415051 +0000 UTC m=+312.133278967" observedRunningTime="2026-03-18 14:05:28.570226478 +0000 UTC m=+312.699354935" watchObservedRunningTime="2026-03-18 14:05:28.656204576 +0000 UTC m=+312.785333053" Mar 18 14:05:28 crc kubenswrapper[4857]: I0318 14:05:28.687511 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" podStartSLOduration=70.687485787 podStartE2EDuration="1m10.687485787s" podCreationTimestamp="2026-03-18 14:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:28.68285941 +0000 UTC m=+312.811987867" watchObservedRunningTime="2026-03-18 14:05:28.687485787 +0000 UTC m=+312.816614234" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.015431 4857 patch_prober.go:28] interesting pod/route-controller-manager-6475fd654c-7vpnr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": read tcp 10.217.0.2:42408->10.217.0.60:8443: read: connection reset by peer" start-of-body= Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.015808 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": read tcp 
10.217.0.2:42408->10.217.0.60:8443: read: connection reset by peer" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.337176 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.380538 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"] Mar 18 14:05:29 crc kubenswrapper[4857]: E0318 14:05:29.380933 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" containerName="controller-manager" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.380977 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" containerName="controller-manager" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.381189 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" containerName="controller-manager" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.381907 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.397995 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"] Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.465986 4857 csr.go:261] certificate signing request csr-hhtsr is approved, waiting to be issued Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.474363 4857 csr.go:257] certificate signing request csr-hhtsr is issued Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.477628 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn57t\" (UniqueName: \"kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t\") pod \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.477732 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert\") pod \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.477839 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca\") pod \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.477934 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config\") pod \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " Mar 18 14:05:29 crc 
kubenswrapper[4857]: I0318 14:05:29.477993 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles\") pod \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\" (UID: \"ba0e89f9-1992-44b3-ae25-a7d4a939f188\") " Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.478708 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ba0e89f9-1992-44b3-ae25-a7d4a939f188" (UID: "ba0e89f9-1992-44b3-ae25-a7d4a939f188"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.478828 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config" (OuterVolumeSpecName: "config") pod "ba0e89f9-1992-44b3-ae25-a7d4a939f188" (UID: "ba0e89f9-1992-44b3-ae25-a7d4a939f188"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.479023 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba0e89f9-1992-44b3-ae25-a7d4a939f188" (UID: "ba0e89f9-1992-44b3-ae25-a7d4a939f188"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.485535 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba0e89f9-1992-44b3-ae25-a7d4a939f188" (UID: "ba0e89f9-1992-44b3-ae25-a7d4a939f188"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.485650 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t" (OuterVolumeSpecName: "kube-api-access-qn57t") pod "ba0e89f9-1992-44b3-ae25-a7d4a939f188" (UID: "ba0e89f9-1992-44b3-ae25-a7d4a939f188"). InnerVolumeSpecName "kube-api-access-qn57t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579325 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8996k\" (UniqueName: \"kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579656 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579682 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579730 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579953 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579972 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579983 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn57t\" (UniqueName: \"kubernetes.io/projected/ba0e89f9-1992-44b3-ae25-a7d4a939f188-kube-api-access-qn57t\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.579994 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba0e89f9-1992-44b3-ae25-a7d4a939f188-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.580008 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ba0e89f9-1992-44b3-ae25-a7d4a939f188-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.582008 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"629717da-142d-436b-bb10-642182966fd8","Type":"ContainerStarted","Data":"ffdf6d88caf10d6a54561c57816f1cdabb947464ff1075c00f34fb77d7b24ade"} Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.583325 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6475fd654c-7vpnr_4eea8f50-204c-4712-b8a9-206d2c67aa43/route-controller-manager/0.log" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.583385 4857 generic.go:334] "Generic (PLEG): container finished" podID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerID="827a5e2cf5b4973155f6d1e4f7fbd6b043b6c47cd3235b6ac9b75f8d58e1156e" exitCode=255 Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.583451 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" event={"ID":"4eea8f50-204c-4712-b8a9-206d2c67aa43","Type":"ContainerDied","Data":"827a5e2cf5b4973155f6d1e4f7fbd6b043b6c47cd3235b6ac9b75f8d58e1156e"} Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.584647 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6c25f42-fe96-46ed-999f-608fed536177","Type":"ContainerStarted","Data":"1557e49d20e2cbc8d01e814671c42cb0606c4adab532731f17601aaa65124f3c"} Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.586677 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" event={"ID":"287df787-86a7-4a56-b5a1-fb55b6bed91b","Type":"ContainerStarted","Data":"1e19efb8d3c40e0ef0eed3cda9a3e8d62af2eb599f42fe8f33a6c18af65497af"} Mar 18 14:05:29 crc kubenswrapper[4857]: 
I0318 14:05:29.588769 4857 generic.go:334] "Generic (PLEG): container finished" podID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" containerID="5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d" exitCode=0 Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.588806 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" event={"ID":"ba0e89f9-1992-44b3-ae25-a7d4a939f188","Type":"ContainerDied","Data":"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d"} Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.588842 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" event={"ID":"ba0e89f9-1992-44b3-ae25-a7d4a939f188","Type":"ContainerDied","Data":"e00e42a40fe7b95d24bec4c4e2672221dff7c2022146d71ed9cf36d196a873b5"} Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.588891 4857 scope.go:117] "RemoveContainer" containerID="5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.588885 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d565f68c9-5cqcw" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.589379 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.589424 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.602361 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=46.602315951 podStartE2EDuration="46.602315951s" podCreationTimestamp="2026-03-18 14:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:29.598566098 +0000 UTC m=+313.727694555" watchObservedRunningTime="2026-03-18 14:05:29.602315951 +0000 UTC m=+313.731444418" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.606988 4857 scope.go:117] "RemoveContainer" containerID="5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d" Mar 18 14:05:29 crc kubenswrapper[4857]: E0318 14:05:29.612070 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d\": container with ID starting with 5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d not found: ID does not exist" 
containerID="5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.612178 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d"} err="failed to get container status \"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d\": rpc error: code = NotFound desc = could not find container \"5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d\": container with ID starting with 5d5af75b35b68d65bc348e5bb24fd3d16be047f8ca214740d50496fc9b4d722d not found: ID does not exist" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.623359 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" podStartSLOduration=109.890596291 podStartE2EDuration="3m29.62333545s" podCreationTimestamp="2026-03-18 14:02:00 +0000 UTC" firstStartedPulling="2026-03-18 14:03:48.494414842 +0000 UTC m=+212.623543299" lastFinishedPulling="2026-03-18 14:05:28.227154001 +0000 UTC m=+312.356282458" observedRunningTime="2026-03-18 14:05:29.621090018 +0000 UTC m=+313.750218475" watchObservedRunningTime="2026-03-18 14:05:29.62333545 +0000 UTC m=+313.752463907" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.641305 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=52.641284295 podStartE2EDuration="52.641284295s" podCreationTimestamp="2026-03-18 14:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:29.639387722 +0000 UTC m=+313.768516179" watchObservedRunningTime="2026-03-18 14:05:29.641284295 +0000 UTC m=+313.770412752" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.656577 4857 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.664099 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d565f68c9-5cqcw"] Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.681078 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.681476 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8996k\" (UniqueName: \"kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.681530 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.681551 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: 
I0318 14:05:29.681596 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.682832 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.683483 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.683637 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.685908 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 
crc kubenswrapper[4857]: I0318 14:05:29.705171 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8996k\" (UniqueName: \"kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k\") pod \"controller-manager-fb67577d5-tcgnb\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:29 crc kubenswrapper[4857]: I0318 14:05:29.997736 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.065953 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6475fd654c-7vpnr_4eea8f50-204c-4712-b8a9-206d2c67aa43/route-controller-manager/0.log" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.066049 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.201718 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca\") pod \"4eea8f50-204c-4712-b8a9-206d2c67aa43\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.201954 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert\") pod \"4eea8f50-204c-4712-b8a9-206d2c67aa43\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.202021 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nxt\" (UniqueName: \"kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt\") pod \"4eea8f50-204c-4712-b8a9-206d2c67aa43\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.202055 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config\") pod \"4eea8f50-204c-4712-b8a9-206d2c67aa43\" (UID: \"4eea8f50-204c-4712-b8a9-206d2c67aa43\") " Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.203105 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca" (OuterVolumeSpecName: "client-ca") pod "4eea8f50-204c-4712-b8a9-206d2c67aa43" (UID: "4eea8f50-204c-4712-b8a9-206d2c67aa43"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.203198 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config" (OuterVolumeSpecName: "config") pod "4eea8f50-204c-4712-b8a9-206d2c67aa43" (UID: "4eea8f50-204c-4712-b8a9-206d2c67aa43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.212724 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4eea8f50-204c-4712-b8a9-206d2c67aa43" (UID: "4eea8f50-204c-4712-b8a9-206d2c67aa43"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.223870 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt" (OuterVolumeSpecName: "kube-api-access-p6nxt") pod "4eea8f50-204c-4712-b8a9-206d2c67aa43" (UID: "4eea8f50-204c-4712-b8a9-206d2c67aa43"). InnerVolumeSpecName "kube-api-access-p6nxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.303397 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"] Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.304203 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.304265 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eea8f50-204c-4712-b8a9-206d2c67aa43-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.304282 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nxt\" (UniqueName: \"kubernetes.io/projected/4eea8f50-204c-4712-b8a9-206d2c67aa43-kube-api-access-p6nxt\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.304296 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eea8f50-204c-4712-b8a9-206d2c67aa43-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:30 crc kubenswrapper[4857]: W0318 14:05:30.317551 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40ae098_b0a0_42ca_a02d_6d766ae12ca4.slice/crio-5f91c7c9a22f4e797dfc66ca08b9ac17eb06dc62d2e6c332ca889786e5b2f016 WatchSource:0}: Error finding container 5f91c7c9a22f4e797dfc66ca08b9ac17eb06dc62d2e6c332ca889786e5b2f016: Status 404 returned error can't find the container with id 5f91c7c9a22f4e797dfc66ca08b9ac17eb06dc62d2e6c332ca889786e5b2f016 Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.476099 4857 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-13 02:58:01.540186494 +0000 UTC Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.476155 4857 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6468h52m31.064034453s for next certificate rotation Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.596522 4857 generic.go:334] "Generic (PLEG): container finished" podID="287df787-86a7-4a56-b5a1-fb55b6bed91b" containerID="1e19efb8d3c40e0ef0eed3cda9a3e8d62af2eb599f42fe8f33a6c18af65497af" exitCode=0 Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.596616 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" event={"ID":"287df787-86a7-4a56-b5a1-fb55b6bed91b","Type":"ContainerDied","Data":"1e19efb8d3c40e0ef0eed3cda9a3e8d62af2eb599f42fe8f33a6c18af65497af"} Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.601010 4857 generic.go:334] "Generic (PLEG): container finished" podID="af5933af-d25b-4d7a-8fda-e95c340a38ac" containerID="4b364a8a1f55996f928000ab80476797f1d600e64460f3a63565d9f73b95965b" exitCode=0 Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.601038 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" event={"ID":"af5933af-d25b-4d7a-8fda-e95c340a38ac","Type":"ContainerDied","Data":"4b364a8a1f55996f928000ab80476797f1d600e64460f3a63565d9f73b95965b"} Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.602685 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" event={"ID":"c40ae098-b0a0-42ca-a02d-6d766ae12ca4","Type":"ContainerStarted","Data":"5f91c7c9a22f4e797dfc66ca08b9ac17eb06dc62d2e6c332ca889786e5b2f016"} Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.604636 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6475fd654c-7vpnr_4eea8f50-204c-4712-b8a9-206d2c67aa43/route-controller-manager/0.log" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.604709 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" event={"ID":"4eea8f50-204c-4712-b8a9-206d2c67aa43","Type":"ContainerDied","Data":"f62e5a3b11d150d108a0b27fb518081a0a07e8f4a45ae34eb3c1c5ae43d544f1"} Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.604735 4857 scope.go:117] "RemoveContainer" containerID="827a5e2cf5b4973155f6d1e4f7fbd6b043b6c47cd3235b6ac9b75f8d58e1156e" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.604893 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr" Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.620725 4857 generic.go:334] "Generic (PLEG): container finished" podID="d6c25f42-fe96-46ed-999f-608fed536177" containerID="1557e49d20e2cbc8d01e814671c42cb0606c4adab532731f17601aaa65124f3c" exitCode=0 Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.620810 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6c25f42-fe96-46ed-999f-608fed536177","Type":"ContainerDied","Data":"1557e49d20e2cbc8d01e814671c42cb0606c4adab532731f17601aaa65124f3c"} Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.652247 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:05:30 crc kubenswrapper[4857]: I0318 14:05:30.658038 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6475fd654c-7vpnr"] Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.171876 4857 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" path="/var/lib/kubelet/pods/4eea8f50-204c-4712-b8a9-206d2c67aa43/volumes" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.174263 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba0e89f9-1992-44b3-ae25-a7d4a939f188" path="/var/lib/kubelet/pods/ba0e89f9-1992-44b3-ae25-a7d4a939f188/volumes" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.477791 4857 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-27 10:51:23.840109229 +0000 UTC Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.477847 4857 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6092h45m52.362265257s for next certificate rotation Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.507732 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"] Mar 18 14:05:31 crc kubenswrapper[4857]: E0318 14:05:31.508039 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerName="route-controller-manager" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.508059 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerName="route-controller-manager" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.508211 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eea8f50-204c-4712-b8a9-206d2c67aa43" containerName="route-controller-manager" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.508689 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.512042 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.512073 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.512042 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.512455 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.513024 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.513024 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.524342 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"] Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.531504 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mz9\" (UniqueName: \"kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.531592 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.531624 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.531658 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.627665 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" event={"ID":"c40ae098-b0a0-42ca-a02d-6d766ae12ca4","Type":"ContainerStarted","Data":"14fc70a5b5f778c1275635d703effd4390092a18cba49f7c37d48863095452cb"} Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.627923 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.633044 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.633091 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.633126 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.633196 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mz9\" (UniqueName: \"kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.634843 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 
18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.634900 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.639109 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.643476 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.650627 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" podStartSLOduration=13.65060383 podStartE2EDuration="13.65060383s" podCreationTimestamp="2026-03-18 14:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:31.649012916 +0000 UTC m=+315.778141373" watchObservedRunningTime="2026-03-18 14:05:31.65060383 +0000 UTC m=+315.779732287" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.652193 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mz9\" (UniqueName: \"kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9\") pod \"route-controller-manager-67d69cc98f-v2hqs\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " 
pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.869336 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:31 crc kubenswrapper[4857]: I0318 14:05:31.884368 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" containerID="cri-o://ce51fcdf9cf0548a945e87b60767dc31f46ee0550c7f0be11cbfea3f3d39f720" gracePeriod=15 Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.255458 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.323790 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.354694 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch7hz\" (UniqueName: \"kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz\") pod \"af5933af-d25b-4d7a-8fda-e95c340a38ac\" (UID: \"af5933af-d25b-4d7a-8fda-e95c340a38ac\") " Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.354821 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzp8\" (UniqueName: \"kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8\") pod \"287df787-86a7-4a56-b5a1-fb55b6bed91b\" (UID: \"287df787-86a7-4a56-b5a1-fb55b6bed91b\") " Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.361728 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz" (OuterVolumeSpecName: "kube-api-access-ch7hz") pod "af5933af-d25b-4d7a-8fda-e95c340a38ac" (UID: "af5933af-d25b-4d7a-8fda-e95c340a38ac"). InnerVolumeSpecName "kube-api-access-ch7hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.362924 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8" (OuterVolumeSpecName: "kube-api-access-nmzp8") pod "287df787-86a7-4a56-b5a1-fb55b6bed91b" (UID: "287df787-86a7-4a56-b5a1-fb55b6bed91b"). InnerVolumeSpecName "kube-api-access-nmzp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.397185 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.437567 4857 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gxtb9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.437645 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455573 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access\") pod \"d6c25f42-fe96-46ed-999f-608fed536177\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455632 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir\") pod \"d6c25f42-fe96-46ed-999f-608fed536177\" (UID: \"d6c25f42-fe96-46ed-999f-608fed536177\") " Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455743 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d6c25f42-fe96-46ed-999f-608fed536177" (UID: "d6c25f42-fe96-46ed-999f-608fed536177"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455960 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzp8\" (UniqueName: \"kubernetes.io/projected/287df787-86a7-4a56-b5a1-fb55b6bed91b-kube-api-access-nmzp8\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455974 4857 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6c25f42-fe96-46ed-999f-608fed536177-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.455989 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch7hz\" (UniqueName: \"kubernetes.io/projected/af5933af-d25b-4d7a-8fda-e95c340a38ac-kube-api-access-ch7hz\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.459514 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d6c25f42-fe96-46ed-999f-608fed536177" (UID: "d6c25f42-fe96-46ed-999f-608fed536177"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.557502 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6c25f42-fe96-46ed-999f-608fed536177-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.613769 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"] Mar 18 14:05:32 crc kubenswrapper[4857]: W0318 14:05:32.616901 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a2aaabd_e76d_4045_b18b_1614c82be989.slice/crio-c41b4a8f8383b92a3795e760c03d93d06514fda8c9621f17f1d086664bb97ca8 WatchSource:0}: Error finding container c41b4a8f8383b92a3795e760c03d93d06514fda8c9621f17f1d086664bb97ca8: Status 404 returned error can't find the container with id c41b4a8f8383b92a3795e760c03d93d06514fda8c9621f17f1d086664bb97ca8 Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.644127 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.644271 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d6c25f42-fe96-46ed-999f-608fed536177","Type":"ContainerDied","Data":"f0d67d1f36443ddc94de7637d28ab02a97527d5278795b405dafaacf65dc18ac"} Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.644328 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d67d1f36443ddc94de7637d28ab02a97527d5278795b405dafaacf65dc18ac" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.646643 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" event={"ID":"287df787-86a7-4a56-b5a1-fb55b6bed91b","Type":"ContainerDied","Data":"da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03"} Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.646675 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da482858b9bded9938a3e03c532664f2caa33e93f168005bd8d45812dfb9da03" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.646791 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564042-j5cmc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.649864 4857 generic.go:334] "Generic (PLEG): container finished" podID="e8c4acb6-a177-4139-ba23-512a709d4033" containerID="ce51fcdf9cf0548a945e87b60767dc31f46ee0550c7f0be11cbfea3f3d39f720" exitCode=0 Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.649940 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" event={"ID":"e8c4acb6-a177-4139-ba23-512a709d4033","Type":"ContainerDied","Data":"ce51fcdf9cf0548a945e87b60767dc31f46ee0550c7f0be11cbfea3f3d39f720"} Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.651068 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" event={"ID":"af5933af-d25b-4d7a-8fda-e95c340a38ac","Type":"ContainerDied","Data":"feca2abe0447ec5d520e7c2a9b60a1003095c525f50f3e0c1cf39e3cdb7b8f13"} Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.651098 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feca2abe0447ec5d520e7c2a9b60a1003095c525f50f3e0c1cf39e3cdb7b8f13" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.651177 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564044-5r7zc" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.664274 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" event={"ID":"1a2aaabd-e76d-4045-b18b-1614c82be989","Type":"ContainerStarted","Data":"c41b4a8f8383b92a3795e760c03d93d06514fda8c9621f17f1d086664bb97ca8"} Mar 18 14:05:32 crc kubenswrapper[4857]: E0318 14:05:32.705107 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd6c25f42_fe96_46ed_999f_608fed536177.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.912174 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.954084 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.954167 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:32 crc kubenswrapper[4857]: I0318 14:05:32.954254 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:32 crc 
kubenswrapper[4857]: I0318 14:05:32.954321 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063684 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mskc7\" (UniqueName: \"kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063769 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063816 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063889 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063921 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.063960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064004 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064073 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064100 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064128 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064150 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064171 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064204 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir\") pod \"e8c4acb6-a177-4139-ba23-512a709d4033\" (UID: \"e8c4acb6-a177-4139-ba23-512a709d4033\") " Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.064914 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: 
"v4-0-config-system-trusted-ca-bundle") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.065190 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.065232 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.065248 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.065293 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.070883 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.070866 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7" (OuterVolumeSpecName: "kube-api-access-mskc7") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "kube-api-access-mskc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.071248 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.071316 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.071617 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.074069 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.074276 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.074549 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.074803 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e8c4acb6-a177-4139-ba23-512a709d4033" (UID: "e8c4acb6-a177-4139-ba23-512a709d4033"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165639 4857 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e8c4acb6-a177-4139-ba23-512a709d4033-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165673 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mskc7\" (UniqueName: \"kubernetes.io/projected/e8c4acb6-a177-4139-ba23-512a709d4033-kube-api-access-mskc7\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165684 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165693 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165705 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-serving-cert\") on 
node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165723 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165733 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165743 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165768 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165782 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165792 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165801 4857 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165810 4857 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e8c4acb6-a177-4139-ba23-512a709d4033-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.165820 4857 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e8c4acb6-a177-4139-ba23-512a709d4033-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.671137 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" event={"ID":"e8c4acb6-a177-4139-ba23-512a709d4033","Type":"ContainerDied","Data":"11cff5bd51649ed8ca9e598a2383787a4e34e10bce36e1a7112c2c47a2c89d8d"} Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.671902 4857 scope.go:117] "RemoveContainer" containerID="ce51fcdf9cf0548a945e87b60767dc31f46ee0550c7f0be11cbfea3f3d39f720" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.671509 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gxtb9" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.673576 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" event={"ID":"1a2aaabd-e76d-4045-b18b-1614c82be989","Type":"ContainerStarted","Data":"09714e7c97e12913da7c85548b857de5bb179c9ca0d9790a8e5cb60af46c7d7d"} Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.674068 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.680288 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.699307 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" podStartSLOduration=15.699291069000001 podStartE2EDuration="15.699291069s" podCreationTimestamp="2026-03-18 14:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:33.696733329 +0000 UTC m=+317.825861786" watchObservedRunningTime="2026-03-18 14:05:33.699291069 +0000 UTC m=+317.828419526" Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.710101 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"] Mar 18 14:05:33 crc kubenswrapper[4857]: I0318 14:05:33.716844 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gxtb9"] Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.177764 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e8c4acb6-a177-4139-ba23-512a709d4033" path="/var/lib/kubelet/pods/e8c4acb6-a177-4139-ba23-512a709d4033/volumes" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.513527 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f79475d48-ncfgv"] Mar 18 14:05:35 crc kubenswrapper[4857]: E0318 14:05:35.514016 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514062 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: E0318 14:05:35.514101 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514109 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: E0318 14:05:35.514119 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514129 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" Mar 18 14:05:35 crc kubenswrapper[4857]: E0318 14:05:35.514141 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6c25f42-fe96-46ed-999f-608fed536177" containerName="pruner" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514148 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6c25f42-fe96-46ed-999f-608fed536177" containerName="pruner" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514369 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" 
containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514393 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c4acb6-a177-4139-ba23-512a709d4033" containerName="oauth-openshift" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514404 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" containerName="oc" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.514413 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6c25f42-fe96-46ed-999f-608fed536177" containerName="pruner" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.515103 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.521186 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.521436 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.521617 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522534 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522679 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522698 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522692 4857 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522906 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.522977 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.523003 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.523186 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.526364 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.534206 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.540204 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.543916 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f79475d48-ncfgv"] Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.545049 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637256 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-error\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637337 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-policies\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637407 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-dir\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637443 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfn2g\" (UniqueName: \"kubernetes.io/projected/8c2aa0cb-1b55-4425-ac30-0369de76a057-kube-api-access-qfn2g\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637484 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-router-certs\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " 
pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637546 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637590 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-session\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637638 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-service-ca\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637678 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637823 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637931 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.637992 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.638059 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-login\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.638087 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739438 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-login\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739565 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739638 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-error\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " 
pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739723 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-policies\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.739846 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-dir\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.749409 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfn2g\" (UniqueName: \"kubernetes.io/projected/8c2aa0cb-1b55-4425-ac30-0369de76a057-kube-api-access-qfn2g\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.749522 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-dir\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.749694 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.749840 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-audit-policies\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750080 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-router-certs\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750164 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750739 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-session\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750823 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-service-ca\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750877 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750908 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.750946 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.751493 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.751913 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.755830 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.756521 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.756603 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-router-certs\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.756655 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-session\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.757270 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-error\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.765064 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.768955 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-login\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.772102 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfn2g\" (UniqueName: \"kubernetes.io/projected/8c2aa0cb-1b55-4425-ac30-0369de76a057-kube-api-access-qfn2g\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " 
pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.774496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c2aa0cb-1b55-4425-ac30-0369de76a057-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f79475d48-ncfgv\" (UID: \"8c2aa0cb-1b55-4425-ac30-0369de76a057\") " pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:35 crc kubenswrapper[4857]: I0318 14:05:35.836923 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.077089 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f79475d48-ncfgv"] Mar 18 14:05:36 crc kubenswrapper[4857]: W0318 14:05:36.082667 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2aa0cb_1b55_4425_ac30_0369de76a057.slice/crio-cff9f8c7f17ef9eaab066d3a6079750033fef6324d93231c2905dfa742fc7592 WatchSource:0}: Error finding container cff9f8c7f17ef9eaab066d3a6079750033fef6324d93231c2905dfa742fc7592: Status 404 returned error can't find the container with id cff9f8c7f17ef9eaab066d3a6079750033fef6324d93231c2905dfa742fc7592 Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.712744 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.713038 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.713174 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.718245 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.718493 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.723645 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" event={"ID":"8c2aa0cb-1b55-4425-ac30-0369de76a057","Type":"ContainerStarted","Data":"cff9f8c7f17ef9eaab066d3a6079750033fef6324d93231c2905dfa742fc7592"} Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.726158 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.732376 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.738035 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.738533 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.814217 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.816890 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.887536 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.978522 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.985961 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 18 14:05:36 crc kubenswrapper[4857]: I0318 14:05:36.993248 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:05:38 crc kubenswrapper[4857]: I0318 14:05:38.979469 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" event={"ID":"8c2aa0cb-1b55-4425-ac30-0369de76a057","Type":"ContainerStarted","Data":"2b8446d8d8d3e8191e29a2bcf3fca537abec08ed645b1d0fafab48986027acaf"} Mar 18 14:05:38 crc kubenswrapper[4857]: I0318 14:05:38.980510 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.106530 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"] Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.107629 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerName="controller-manager" containerID="cri-o://14fc70a5b5f778c1275635d703effd4390092a18cba49f7c37d48863095452cb" gracePeriod=30 Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.151824 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.151950 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.203621 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podStartSLOduration=33.203575311 podStartE2EDuration="33.203575311s" podCreationTimestamp="2026-03-18 14:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:39.203127699 +0000 UTC m=+323.332256156" watchObservedRunningTime="2026-03-18 14:05:39.203575311 +0000 UTC m=+323.332703768" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.624076 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.634102 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"] Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.634331 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" 
podUID="1a2aaabd-e76d-4045-b18b-1614c82be989" containerName="route-controller-manager" containerID="cri-o://09714e7c97e12913da7c85548b857de5bb179c9ca0d9790a8e5cb60af46c7d7d" gracePeriod=30 Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.650872 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.667649 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb942ab9-842d-4078-9789-2fe1788b4dfb-metrics-certs\") pod \"network-metrics-daemon-f7vgs\" (UID: \"eb942ab9-842d-4078-9789-2fe1788b4dfb\") " pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.798091 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.803883 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f7vgs" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.993577 4857 generic.go:334] "Generic (PLEG): container finished" podID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerID="14fc70a5b5f778c1275635d703effd4390092a18cba49f7c37d48863095452cb" exitCode=0 Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.993668 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" event={"ID":"c40ae098-b0a0-42ca-a02d-6d766ae12ca4","Type":"ContainerDied","Data":"14fc70a5b5f778c1275635d703effd4390092a18cba49f7c37d48863095452cb"} Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.999020 4857 patch_prober.go:28] interesting pod/controller-manager-fb67577d5-tcgnb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.999080 4857 generic.go:334] "Generic (PLEG): container finished" podID="1a2aaabd-e76d-4045-b18b-1614c82be989" containerID="09714e7c97e12913da7c85548b857de5bb179c9ca0d9790a8e5cb60af46c7d7d" exitCode=0 Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.999074 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Mar 18 14:05:39 crc kubenswrapper[4857]: I0318 14:05:39.999190 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" 
event={"ID":"1a2aaabd-e76d-4045-b18b-1614c82be989","Type":"ContainerDied","Data":"09714e7c97e12913da7c85548b857de5bb179c9ca0d9790a8e5cb60af46c7d7d"} Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.290297 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.339559 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.391732 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"] Mar 18 14:05:40 crc kubenswrapper[4857]: E0318 14:05:40.392023 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerName="controller-manager" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.392041 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerName="controller-manager" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.392165 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" containerName="controller-manager" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.392579 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.420546 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"] Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444294 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8996k\" (UniqueName: \"kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k\") pod \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444412 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles\") pod \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444448 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert\") pod \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444504 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca\") pod \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\" (UID: \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444545 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config\") pod \"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\" (UID: 
\"c40ae098-b0a0-42ca-a02d-6d766ae12ca4\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444846 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclmc\" (UniqueName: \"kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444885 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444910 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.444999 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.445036 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.445395 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca" (OuterVolumeSpecName: "client-ca") pod "c40ae098-b0a0-42ca-a02d-6d766ae12ca4" (UID: "c40ae098-b0a0-42ca-a02d-6d766ae12ca4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.445634 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c40ae098-b0a0-42ca-a02d-6d766ae12ca4" (UID: "c40ae098-b0a0-42ca-a02d-6d766ae12ca4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.445827 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config" (OuterVolumeSpecName: "config") pod "c40ae098-b0a0-42ca-a02d-6d766ae12ca4" (UID: "c40ae098-b0a0-42ca-a02d-6d766ae12ca4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.448994 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k" (OuterVolumeSpecName: "kube-api-access-8996k") pod "c40ae098-b0a0-42ca-a02d-6d766ae12ca4" (UID: "c40ae098-b0a0-42ca-a02d-6d766ae12ca4"). InnerVolumeSpecName "kube-api-access-8996k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.452649 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c40ae098-b0a0-42ca-a02d-6d766ae12ca4" (UID: "c40ae098-b0a0-42ca-a02d-6d766ae12ca4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.547597 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.547903 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pclmc\" (UniqueName: \"kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.547923 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.547992 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config\") pod 
\"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548011 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548076 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548088 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8996k\" (UniqueName: \"kubernetes.io/projected/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-kube-api-access-8996k\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548098 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548106 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.548116 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c40ae098-b0a0-42ca-a02d-6d766ae12ca4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.549278 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.549858 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.554253 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.555507 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.575697 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pclmc\" (UniqueName: \"kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc\") pod \"controller-manager-7f95f94f6-7dg75\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc 
kubenswrapper[4857]: I0318 14:05:40.660836 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.732214 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.754193 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert\") pod \"1a2aaabd-e76d-4045-b18b-1614c82be989\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.754258 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config\") pod \"1a2aaabd-e76d-4045-b18b-1614c82be989\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.754324 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mz9\" (UniqueName: \"kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9\") pod \"1a2aaabd-e76d-4045-b18b-1614c82be989\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.754436 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca\") pod \"1a2aaabd-e76d-4045-b18b-1614c82be989\" (UID: \"1a2aaabd-e76d-4045-b18b-1614c82be989\") " Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.755505 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a2aaabd-e76d-4045-b18b-1614c82be989" (UID: "1a2aaabd-e76d-4045-b18b-1614c82be989"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.756313 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config" (OuterVolumeSpecName: "config") pod "1a2aaabd-e76d-4045-b18b-1614c82be989" (UID: "1a2aaabd-e76d-4045-b18b-1614c82be989"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.759949 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a2aaabd-e76d-4045-b18b-1614c82be989" (UID: "1a2aaabd-e76d-4045-b18b-1614c82be989"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.761344 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9" (OuterVolumeSpecName: "kube-api-access-27mz9") pod "1a2aaabd-e76d-4045-b18b-1614c82be989" (UID: "1a2aaabd-e76d-4045-b18b-1614c82be989"). InnerVolumeSpecName "kube-api-access-27mz9". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.856180 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a2aaabd-e76d-4045-b18b-1614c82be989-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.856208 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.856218 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mz9\" (UniqueName: \"kubernetes.io/projected/1a2aaabd-e76d-4045-b18b-1614c82be989-kube-api-access-27mz9\") on node \"crc\" DevicePath \"\""
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.856229 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a2aaabd-e76d-4045-b18b-1614c82be989-client-ca\") on node \"crc\" DevicePath \"\""
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.926140 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"]
Mar 18 14:05:40 crc kubenswrapper[4857]: W0318 14:05:40.933009 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff58b69_1f7a_4ba8_a41a_94ff140f68be.slice/crio-aea7313b1c48f85a464a2eaeadbf288d7dfa45a3dc52427d096cc143a56edc66 WatchSource:0}: Error finding container aea7313b1c48f85a464a2eaeadbf288d7dfa45a3dc52427d096cc143a56edc66: Status 404 returned error can't find the container with id aea7313b1c48f85a464a2eaeadbf288d7dfa45a3dc52427d096cc143a56edc66
Mar 18 14:05:40 crc kubenswrapper[4857]: I0318 14:05:40.938934 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f7vgs"]
Mar 18 14:05:40 crc kubenswrapper[4857]: W0318 14:05:40.948027 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb942ab9_842d_4078_9789_2fe1788b4dfb.slice/crio-5788c00778e1e13c76ddf6a569011e527eadb1de9e7bb40cc92b65590739de6b WatchSource:0}: Error finding container 5788c00778e1e13c76ddf6a569011e527eadb1de9e7bb40cc92b65590739de6b: Status 404 returned error can't find the container with id 5788c00778e1e13c76ddf6a569011e527eadb1de9e7bb40cc92b65590739de6b
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.012068 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad02f04d6ecb7cdadb2210a4af87d50fd7bfac1800f57463a92735ee3fca9f01"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.012137 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"62172b84e4ced91d20968a1a9046df5a4494e2a39a0eb19210a04a73addbf8ca"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.014712 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f8e4c8e55b360659ac76438d6ff8f8d3aceb8625620d58161c8996d8c7b1bfb4"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.014748 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7e5636b213e9e070ff9ec0f4367d8e695d80e0221eb17a18122c901e43c3e004"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.016881 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.017131 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fb67577d5-tcgnb" event={"ID":"c40ae098-b0a0-42ca-a02d-6d766ae12ca4","Type":"ContainerDied","Data":"5f91c7c9a22f4e797dfc66ca08b9ac17eb06dc62d2e6c332ca889786e5b2f016"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.017243 4857 scope.go:117] "RemoveContainer" containerID="14fc70a5b5f778c1275635d703effd4390092a18cba49f7c37d48863095452cb"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.018382 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" event={"ID":"dff58b69-1f7a-4ba8-a41a-94ff140f68be","Type":"ContainerStarted","Data":"aea7313b1c48f85a464a2eaeadbf288d7dfa45a3dc52427d096cc143a56edc66"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.024181 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs" event={"ID":"1a2aaabd-e76d-4045-b18b-1614c82be989","Type":"ContainerDied","Data":"c41b4a8f8383b92a3795e760c03d93d06514fda8c9621f17f1d086664bb97ca8"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.024355 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.040239 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerStarted","Data":"4606e4f0f09d379b1168ce9bc9679bac243362b7ff31ffce124bbe5fdcebf653"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.044097 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5c1c3bb9f51124deb4bfc6642b6c7b7b2a19fe2f97ab7e51bb41e5abd79f88c5"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.047049 4857 scope.go:117] "RemoveContainer" containerID="09714e7c97e12913da7c85548b857de5bb179c9ca0d9790a8e5cb60af46c7d7d"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.049668 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" event={"ID":"eb942ab9-842d-4078-9789-2fe1788b4dfb","Type":"ContainerStarted","Data":"5788c00778e1e13c76ddf6a569011e527eadb1de9e7bb40cc92b65590739de6b"}
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.069118 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"]
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.072342 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fb67577d5-tcgnb"]
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.086867 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"]
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.087138 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d69cc98f-v2hqs"]
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.349907 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2aaabd-e76d-4045-b18b-1614c82be989" path="/var/lib/kubelet/pods/1a2aaabd-e76d-4045-b18b-1614c82be989/volumes"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.351073 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40ae098-b0a0-42ca-a02d-6d766ae12ca4" path="/var/lib/kubelet/pods/c40ae098-b0a0-42ca-a02d-6d766ae12ca4/volumes"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.634442 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"]
Mar 18 14:05:41 crc kubenswrapper[4857]: E0318 14:05:41.634723 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2aaabd-e76d-4045-b18b-1614c82be989" containerName="route-controller-manager"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.634746 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2aaabd-e76d-4045-b18b-1614c82be989" containerName="route-controller-manager"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.634955 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2aaabd-e76d-4045-b18b-1614c82be989" containerName="route-controller-manager"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.637184 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.654800 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.655262 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.655555 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.655804 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.656061 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.656273 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.673809 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"]
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.896707 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.896792 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.896844 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw5f\" (UniqueName: \"kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.896862 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.998149 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.998228 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.998289 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.998317 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw5f\" (UniqueName: \"kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.999466 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:41 crc kubenswrapper[4857]: I0318 14:05:41.999648 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.006474 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.029933 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsw5f\" (UniqueName: \"kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f\") pod \"route-controller-manager-798f9868c-zh9km\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.101196 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" event={"ID":"dff58b69-1f7a-4ba8-a41a-94ff140f68be","Type":"ContainerStarted","Data":"4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0"}
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.101747 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.108532 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.111903 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c35a08b2d1a0029369a055414138aa589842f636db386a6eefc7a20e526ebbf9"}
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.118243 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" event={"ID":"eb942ab9-842d-4078-9789-2fe1788b4dfb","Type":"ContainerStarted","Data":"7f06035d7c68f5bffc2655261d14476be505111d656e8f541f8d4dc2e49cabb6"}
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.129593 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" podStartSLOduration=3.129564207 podStartE2EDuration="3.129564207s" podCreationTimestamp="2026-03-18 14:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:42.126699172 +0000 UTC m=+326.255827629" watchObservedRunningTime="2026-03-18 14:05:42.129564207 +0000 UTC m=+326.258692664"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.329264 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:42 crc kubenswrapper[4857]: E0318 14:05:42.369953 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.963607 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.963725 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.963812 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:05:42 crc kubenswrapper[4857]: I0318 14:05:42.963816 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:05:45 crc kubenswrapper[4857]: I0318 14:05:45.408000 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" start-of-body=
Mar 18 14:05:45 crc kubenswrapper[4857]: I0318 14:05:45.408472 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded"
Mar 18 14:05:45 crc kubenswrapper[4857]: I0318 14:05:45.420400 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:45 crc kubenswrapper[4857]: I0318 14:05:45.420570 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:47 crc kubenswrapper[4857]: I0318 14:05:46.789302 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:47 crc kubenswrapper[4857]: I0318 14:05:46.790484 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.887608 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.887725 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.888014 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.888060 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.888303 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:47.888350 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.226201 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.237642 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.227608 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.237844 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.232850 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.237906 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.232937 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.237969 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 14:05:48 crc kubenswrapper[4857]: E0318 14:05:48.712867 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.312s"
Mar 18 14:05:48 crc kubenswrapper[4857]: I0318 14:05:48.725449 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 18 14:05:49 crc kubenswrapper[4857]: I0318 14:05:49.899239 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerStarted","Data":"d7ddbdb7031b4caadd5dab8ece79a91eba0fc712310a5dc2dbe7b4dd5ea6d22c"}
Mar 18 14:05:49 crc kubenswrapper[4857]: I0318 14:05:49.904779 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerStarted","Data":"f95f63311a0844bf9d5258bf521e469709761ea046a292d89b102642348e2dde"}
Mar 18 14:05:49 crc kubenswrapper[4857]: I0318 14:05:49.920105 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerStarted","Data":"c1f3b2bc4264a1b3a1f09df5ab9848cfb7070676e148993498bb7950873b109d"}
Mar 18 14:05:50 crc kubenswrapper[4857]: I0318 14:05:50.341633 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"]
Mar 18 14:05:50 crc kubenswrapper[4857]: W0318 14:05:50.359791 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f319dc5_637c_474f_896f_0133fbb6971f.slice/crio-6daebd5d254ee45882ea1507d68e76f9ac73d61bae8d884cea9bd692a7f3d62e WatchSource:0}: Error finding container 6daebd5d254ee45882ea1507d68e76f9ac73d61bae8d884cea9bd692a7f3d62e: Status 404 returned error can't find the container with id 6daebd5d254ee45882ea1507d68e76f9ac73d61bae8d884cea9bd692a7f3d62e
Mar 18 14:05:51 crc kubenswrapper[4857]: I0318 14:05:51.094317 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" event={"ID":"1f319dc5-637c-474f-896f-0133fbb6971f","Type":"ContainerStarted","Data":"6daebd5d254ee45882ea1507d68e76f9ac73d61bae8d884cea9bd692a7f3d62e"}
Mar 18 14:05:51 crc kubenswrapper[4857]: I0318 14:05:51.097928 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f7vgs" event={"ID":"eb942ab9-842d-4078-9789-2fe1788b4dfb","Type":"ContainerStarted","Data":"3af9f6481e67345385e77112c9e42a0f7d6780c5a245db9d89376de788c62d80"}
Mar 18 14:05:51 crc kubenswrapper[4857]: I0318 14:05:51.251629 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-f7vgs" podStartSLOduration=285.251568617 podStartE2EDuration="4m45.251568617s" podCreationTimestamp="2026-03-18 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:51.251536016 +0000 UTC m=+335.380664473" watchObservedRunningTime="2026-03-18 14:05:51.251568617 +0000 UTC m=+335.380697074"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.109019 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" event={"ID":"1f319dc5-637c-474f-896f-0133fbb6971f","Type":"ContainerStarted","Data":"41925f65df943a3a2bf45f7da2f1da2394d62b61156e064250e86d1436947a6a"}
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.132795 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" podStartSLOduration=13.132774604 podStartE2EDuration="13.132774604s" podCreationTimestamp="2026-03-18 14:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:05:52.129312963 +0000 UTC m=+336.258441430" watchObservedRunningTime="2026-03-18 14:05:52.132774604 +0000 UTC m=+336.261903061"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.330419 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.842319 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.975116 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.975178 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.975237 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-gvkpz"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.975797 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"995a79c7a78c4cbfb65584c07d9dbbbed9d22ddc43ab2c793e6cc11dd2a7edc8"} pod="openshift-console/downloads-7954f5f757-gvkpz" containerMessage="Container download-server failed liveness probe, will be restarted"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.975868 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" containerID="cri-o://995a79c7a78c4cbfb65584c07d9dbbbed9d22ddc43ab2c793e6cc11dd2a7edc8" gracePeriod=2
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.984291 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.984368 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.985350 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Mar 18 14:05:52 crc kubenswrapper[4857]: I0318 14:05:52.985415 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Mar 18 14:05:54 crc kubenswrapper[4857]: I0318 14:05:54.163669 4857 generic.go:334] "Generic (PLEG): container finished" podID="ef638f17-5999-467e-b170-8ef20068e451" containerID="995a79c7a78c4cbfb65584c07d9dbbbed9d22ddc43ab2c793e6cc11dd2a7edc8" exitCode=0
Mar 18 14:05:54 crc kubenswrapper[4857]: I0318 14:05:54.163745 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerDied","Data":"995a79c7a78c4cbfb65584c07d9dbbbed9d22ddc43ab2c793e6cc11dd2a7edc8"}
Mar 18 14:05:54 crc kubenswrapper[4857]: I0318 14:05:54.164352 4857 scope.go:117] "RemoveContainer" containerID="9cde9c5776bcb432dcbd8afaa0a1602aafb3e49e07f778e440be0e091bce12ed"
Mar 18 14:05:55 crc kubenswrapper[4857]: I0318 14:05:55.206930 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerDied","Data":"f95f63311a0844bf9d5258bf521e469709761ea046a292d89b102642348e2dde"}
Mar 18 14:05:55 crc kubenswrapper[4857]: I0318 14:05:55.206927 4857 generic.go:334] "Generic (PLEG): container finished" podID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerID="f95f63311a0844bf9d5258bf521e469709761ea046a292d89b102642348e2dde" exitCode=0
Mar 18 14:05:55 crc kubenswrapper[4857]: I0318 14:05:55.229310 4857 generic.go:334] "Generic (PLEG): container finished" podID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerID="c1f3b2bc4264a1b3a1f09df5ab9848cfb7070676e148993498bb7950873b109d" exitCode=0
Mar 18 14:05:55 crc kubenswrapper[4857]: I0318 14:05:55.229373 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerDied","Data":"c1f3b2bc4264a1b3a1f09df5ab9848cfb7070676e148993498bb7950873b109d"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.323886 4857 generic.go:334] "Generic (PLEG): container finished" podID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerID="4606e4f0f09d379b1168ce9bc9679bac243362b7ff31ffce124bbe5fdcebf653" exitCode=0
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.324029 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerDied","Data":"4606e4f0f09d379b1168ce9bc9679bac243362b7ff31ffce124bbe5fdcebf653"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.326325 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gvkpz" event={"ID":"ef638f17-5999-467e-b170-8ef20068e451","Type":"ContainerStarted","Data":"ffab32adf9ea237ada93f1589aad4500cf612df27f153763b3e1025b3fb7a471"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.330381 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerStarted","Data":"a2fac1f4719481063e8bce358ff7802a0ec5f434d58d46b223aa0997366bdd02"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.338518 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerStarted","Data":"0a37bd8d36dac9ffd6a72634e06cb27c905c873c0ce0105b4cc7a2fbf50c14b5"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.342315 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerStarted","Data":"6fd660858e01b81d2306215fbd06a9be62b6b64347d25f39e974f0ac2756598a"}
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.347008 4857 generic.go:334] "Generic (PLEG): container finished" podID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerID="d7ddbdb7031b4caadd5dab8ece79a91eba0fc712310a5dc2dbe7b4dd5ea6d22c" exitCode=0
Mar 18 14:05:56 crc kubenswrapper[4857]: I0318 14:05:56.347115 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerDied","Data":"d7ddbdb7031b4caadd5dab8ece79a91eba0fc712310a5dc2dbe7b4dd5ea6d22c"}
Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.434853 4857 generic.go:334] "Generic (PLEG): container finished" podID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerID="0a37bd8d36dac9ffd6a72634e06cb27c905c873c0ce0105b4cc7a2fbf50c14b5" exitCode=0
Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.435370 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerDied","Data":"0a37bd8d36dac9ffd6a72634e06cb27c905c873c0ce0105b4cc7a2fbf50c14b5"}
Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.442020 4857 generic.go:334] "Generic (PLEG): container finished" podID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerID="a2fac1f4719481063e8bce358ff7802a0ec5f434d58d46b223aa0997366bdd02" exitCode=0
Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.442362 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerDied","Data":"a2fac1f4719481063e8bce358ff7802a0ec5f434d58d46b223aa0997366bdd02"} Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.442760 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.443588 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:57 crc kubenswrapper[4857]: I0318 14:05:57.443689 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:58 crc kubenswrapper[4857]: I0318 14:05:58.453102 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:05:58 crc kubenswrapper[4857]: I0318 14:05:58.453226 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:05:58 crc kubenswrapper[4857]: I0318 14:05:58.679280 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"] Mar 18 14:05:58 crc 
kubenswrapper[4857]: I0318 14:05:58.679684 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerName="controller-manager" containerID="cri-o://4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0" gracePeriod=30 Mar 18 14:05:58 crc kubenswrapper[4857]: I0318 14:05:58.706238 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"] Mar 18 14:05:58 crc kubenswrapper[4857]: I0318 14:05:58.706497 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" podUID="1f319dc5-637c-474f-896f-0133fbb6971f" containerName="route-controller-manager" containerID="cri-o://41925f65df943a3a2bf45f7da2f1da2394d62b61156e064250e86d1436947a6a" gracePeriod=30 Mar 18 14:05:58 crc kubenswrapper[4857]: E0318 14:05:58.854969 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff58b69_1f7a_4ba8_a41a_94ff140f68be.slice/crio-4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.234163 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564046-slwnd"] Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.236273 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.256410 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.256828 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.257041 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.263448 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564046-slwnd"] Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.290088 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfng\" (UniqueName: \"kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng\") pod \"auto-csr-approver-29564046-slwnd\" (UID: \"ca33260b-e859-4d77-9509-3e08e46be7f1\") " pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.393157 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxfng\" (UniqueName: \"kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng\") pod \"auto-csr-approver-29564046-slwnd\" (UID: \"ca33260b-e859-4d77-9509-3e08e46be7f1\") " pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.421715 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxfng\" (UniqueName: \"kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng\") pod \"auto-csr-approver-29564046-slwnd\" (UID: \"ca33260b-e859-4d77-9509-3e08e46be7f1\") " 
pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.673319 4857 generic.go:334] "Generic (PLEG): container finished" podID="1f319dc5-637c-474f-896f-0133fbb6971f" containerID="41925f65df943a3a2bf45f7da2f1da2394d62b61156e064250e86d1436947a6a" exitCode=0 Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.673403 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" event={"ID":"1f319dc5-637c-474f-896f-0133fbb6971f","Type":"ContainerDied","Data":"41925f65df943a3a2bf45f7da2f1da2394d62b61156e064250e86d1436947a6a"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.677055 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerStarted","Data":"f810ec1ba2d6d7aa7a6c3de2f8d60f311c51ed09a1b1feea921bce8272e07623"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.684931 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerStarted","Data":"4a63de8df929d1c369414c05e0778aa673bc2c625dc4e2d7864795bab4da30d3"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.689113 4857 generic.go:334] "Generic (PLEG): container finished" podID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerID="4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0" exitCode=0 Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.689176 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" event={"ID":"dff58b69-1f7a-4ba8-a41a-94ff140f68be","Type":"ContainerDied","Data":"4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.694411 4857 generic.go:334] "Generic 
(PLEG): container finished" podID="a7272920-8e13-4414-8a32-dfea84d2460f" containerID="6fd660858e01b81d2306215fbd06a9be62b6b64347d25f39e974f0ac2756598a" exitCode=0 Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.694465 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerDied","Data":"6fd660858e01b81d2306215fbd06a9be62b6b64347d25f39e974f0ac2756598a"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.704322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerStarted","Data":"1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.711977 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c89xj" podStartSLOduration=8.109888579 podStartE2EDuration="1m53.711936163s" podCreationTimestamp="2026-03-18 14:04:07 +0000 UTC" firstStartedPulling="2026-03-18 14:04:14.054607266 +0000 UTC m=+238.183735723" lastFinishedPulling="2026-03-18 14:05:59.65665485 +0000 UTC m=+343.785783307" observedRunningTime="2026-03-18 14:06:00.70455452 +0000 UTC m=+344.833682987" watchObservedRunningTime="2026-03-18 14:06:00.711936163 +0000 UTC m=+344.841064620" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.717653 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.741156 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerStarted","Data":"9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.742198 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzfl4" podStartSLOduration=11.128123361 podStartE2EDuration="1m56.742165212s" podCreationTimestamp="2026-03-18 14:04:04 +0000 UTC" firstStartedPulling="2026-03-18 14:04:14.028383658 +0000 UTC m=+238.157512115" lastFinishedPulling="2026-03-18 14:05:59.642425509 +0000 UTC m=+343.771553966" observedRunningTime="2026-03-18 14:06:00.740852688 +0000 UTC m=+344.869981145" watchObservedRunningTime="2026-03-18 14:06:00.742165212 +0000 UTC m=+344.871293669" Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.746939 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerStarted","Data":"a6ea4af12158e67b8c6b7d32cff44f35f03dcd46f7af116e30ab61bd92f7596c"} Mar 18 14:06:00 crc kubenswrapper[4857]: I0318 14:06:00.787169 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerStarted","Data":"ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81"} Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.061487 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cghkz" podStartSLOduration=11.376011632 podStartE2EDuration="1m57.061409667s" podCreationTimestamp="2026-03-18 14:04:04 +0000 UTC" 
firstStartedPulling="2026-03-18 14:04:14.058803691 +0000 UTC m=+238.187932148" lastFinishedPulling="2026-03-18 14:05:59.744201726 +0000 UTC m=+343.873330183" observedRunningTime="2026-03-18 14:06:01.060883793 +0000 UTC m=+345.190012260" watchObservedRunningTime="2026-03-18 14:06:01.061409667 +0000 UTC m=+345.190538134" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.169707 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2g48f" podStartSLOduration=9.220474632 podStartE2EDuration="1m55.169671623s" podCreationTimestamp="2026-03-18 14:04:06 +0000 UTC" firstStartedPulling="2026-03-18 14:04:14.036456209 +0000 UTC m=+238.165584666" lastFinishedPulling="2026-03-18 14:05:59.9856532 +0000 UTC m=+344.114781657" observedRunningTime="2026-03-18 14:06:01.167582659 +0000 UTC m=+345.296711116" watchObservedRunningTime="2026-03-18 14:06:01.169671623 +0000 UTC m=+345.298800080" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.192551 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q8pg8" podStartSLOduration=12.621233665 podStartE2EDuration="1m58.19252887s" podCreationTimestamp="2026-03-18 14:04:03 +0000 UTC" firstStartedPulling="2026-03-18 14:04:13.965565318 +0000 UTC m=+238.094693775" lastFinishedPulling="2026-03-18 14:05:59.536860523 +0000 UTC m=+343.665988980" observedRunningTime="2026-03-18 14:06:01.190129258 +0000 UTC m=+345.319257715" watchObservedRunningTime="2026-03-18 14:06:01.19252887 +0000 UTC m=+345.321657327" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.655743 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.666602 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.878073 4857 patch_prober.go:28] interesting pod/controller-manager-7f95f94f6-7dg75 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: i/o timeout" start-of-body= Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.878284 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: i/o timeout" Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987185 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config\") pod \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987276 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert\") pod \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987311 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsw5f\" (UniqueName: \"kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f\") pod \"1f319dc5-637c-474f-896f-0133fbb6971f\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987332 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca\") pod \"1f319dc5-637c-474f-896f-0133fbb6971f\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987355 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pclmc\" (UniqueName: \"kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc\") pod \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987376 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles\") pod \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987434 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config\") pod \"1f319dc5-637c-474f-896f-0133fbb6971f\" (UID: \"1f319dc5-637c-474f-896f-0133fbb6971f\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987470 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca\") pod \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\" (UID: \"dff58b69-1f7a-4ba8-a41a-94ff140f68be\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.987497 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert\") pod \"1f319dc5-637c-474f-896f-0133fbb6971f\" (UID: 
\"1f319dc5-637c-474f-896f-0133fbb6971f\") " Mar 18 14:06:01 crc kubenswrapper[4857]: I0318 14:06:01.991762 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dff58b69-1f7a-4ba8-a41a-94ff140f68be" (UID: "dff58b69-1f7a-4ba8-a41a-94ff140f68be"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.001395 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config" (OuterVolumeSpecName: "config") pod "1f319dc5-637c-474f-896f-0133fbb6971f" (UID: "1f319dc5-637c-474f-896f-0133fbb6971f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.001624 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config" (OuterVolumeSpecName: "config") pod "dff58b69-1f7a-4ba8-a41a-94ff140f68be" (UID: "dff58b69-1f7a-4ba8-a41a-94ff140f68be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.002199 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca" (OuterVolumeSpecName: "client-ca") pod "dff58b69-1f7a-4ba8-a41a-94ff140f68be" (UID: "dff58b69-1f7a-4ba8-a41a-94ff140f68be"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.008323 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c867bfcc4-nc2bq"] Mar 18 14:06:02 crc kubenswrapper[4857]: E0318 14:06:02.011349 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerName="controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.011397 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerName="controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: E0318 14:06:02.011430 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f319dc5-637c-474f-896f-0133fbb6971f" containerName="route-controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.011436 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f319dc5-637c-474f-896f-0133fbb6971f" containerName="route-controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.011650 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f319dc5-637c-474f-896f-0133fbb6971f" containerName="route-controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.011668 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" containerName="controller-manager" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.012684 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.022196 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f319dc5-637c-474f-896f-0133fbb6971f" (UID: "1f319dc5-637c-474f-896f-0133fbb6971f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.023130 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerStarted","Data":"87c3ff8cbfd888dbd57bc57b8da772bd5ff6cc39f6d4d059acc010408f91feca"} Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.026712 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c867bfcc4-nc2bq"] Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.034638 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f319dc5-637c-474f-896f-0133fbb6971f" (UID: "1f319dc5-637c-474f-896f-0133fbb6971f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.035519 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" event={"ID":"dff58b69-1f7a-4ba8-a41a-94ff140f68be","Type":"ContainerDied","Data":"aea7313b1c48f85a464a2eaeadbf288d7dfa45a3dc52427d096cc143a56edc66"} Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.035622 4857 scope.go:117] "RemoveContainer" containerID="4005932ee2d5bcc5a6b7b2f5f75efa58f9837259bfdb64d9d466357dc42bb3e0" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.035923 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f95f94f6-7dg75" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.038124 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc" (OuterVolumeSpecName: "kube-api-access-pclmc") pod "dff58b69-1f7a-4ba8-a41a-94ff140f68be" (UID: "dff58b69-1f7a-4ba8-a41a-94ff140f68be"). InnerVolumeSpecName "kube-api-access-pclmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.043369 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dff58b69-1f7a-4ba8-a41a-94ff140f68be" (UID: "dff58b69-1f7a-4ba8-a41a-94ff140f68be"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.144538 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f" (OuterVolumeSpecName: "kube-api-access-fsw5f") pod "1f319dc5-637c-474f-896f-0133fbb6971f" (UID: "1f319dc5-637c-474f-896f-0133fbb6971f"). InnerVolumeSpecName "kube-api-access-fsw5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.146698 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l9sbh" podStartSLOduration=9.925252767 podStartE2EDuration="1m56.14666381s" podCreationTimestamp="2026-03-18 14:04:06 +0000 UTC" firstStartedPulling="2026-03-18 14:04:13.978405969 +0000 UTC m=+238.107534436" lastFinishedPulling="2026-03-18 14:06:00.199817022 +0000 UTC m=+344.328945479" observedRunningTime="2026-03-18 14:06:02.143881707 +0000 UTC m=+346.273010174" watchObservedRunningTime="2026-03-18 14:06:02.14666381 +0000 UTC m=+346.275792267" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.162595 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.162860 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.162899 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pclmc\" (UniqueName: \"kubernetes.io/projected/dff58b69-1f7a-4ba8-a41a-94ff140f68be-kube-api-access-pclmc\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.163091 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km" event={"ID":"1f319dc5-637c-474f-896f-0133fbb6971f","Type":"ContainerDied","Data":"6daebd5d254ee45882ea1507d68e76f9ac73d61bae8d884cea9bd692a7f3d62e"} Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.170401 4857 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.170730 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f319dc5-637c-474f-896f-0133fbb6971f-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.190736 4857 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-client-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.190790 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f319dc5-637c-474f-896f-0133fbb6971f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.190802 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff58b69-1f7a-4ba8-a41a-94ff140f68be-config\") 
on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.190812 4857 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff58b69-1f7a-4ba8-a41a-94ff140f68be-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.203454 4857 scope.go:117] "RemoveContainer" containerID="41925f65df943a3a2bf45f7da2f1da2394d62b61156e064250e86d1436947a6a" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.294368 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-client-ca\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.294420 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8q7g\" (UniqueName: \"kubernetes.io/projected/3d4741b7-1f3f-405d-b675-d0141044421a-kube-api-access-j8q7g\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.294467 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-proxy-ca-bundles\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.294871 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-config\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.295198 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4741b7-1f3f-405d-b675-d0141044421a-serving-cert\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.295422 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsw5f\" (UniqueName: \"kubernetes.io/projected/1f319dc5-637c-474f-896f-0133fbb6971f-kube-api-access-fsw5f\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.316825 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564046-slwnd"] Mar 18 14:06:02 crc kubenswrapper[4857]: W0318 14:06:02.334448 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca33260b_e859_4d77_9509_3e08e46be7f1.slice/crio-fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482 WatchSource:0}: Error finding container fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482: Status 404 returned error can't find the container with id fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482 Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.369392 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"] Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.373156 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-7f95f94f6-7dg75"] Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.395825 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4741b7-1f3f-405d-b675-d0141044421a-serving-cert\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.395896 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-client-ca\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.395930 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8q7g\" (UniqueName: \"kubernetes.io/projected/3d4741b7-1f3f-405d-b675-d0141044421a-kube-api-access-j8q7g\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.395988 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-proxy-ca-bundles\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.396044 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-config\") pod 
\"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.398541 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-proxy-ca-bundles\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.399844 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-client-ca\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.401348 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4741b7-1f3f-405d-b675-d0141044421a-config\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.410811 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d4741b7-1f3f-405d-b675-d0141044421a-serving-cert\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.424018 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8q7g\" (UniqueName: 
\"kubernetes.io/projected/3d4741b7-1f3f-405d-b675-d0141044421a-kube-api-access-j8q7g\") pod \"controller-manager-c867bfcc4-nc2bq\" (UID: \"3d4741b7-1f3f-405d-b675-d0141044421a\") " pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.502062 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.523570 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"] Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.527505 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-798f9868c-zh9km"] Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.981187 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.981661 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.982452 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:06:02 crc kubenswrapper[4857]: I0318 14:06:02.982489 4857 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:06:03 crc kubenswrapper[4857]: I0318 14:06:03.053498 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c867bfcc4-nc2bq"] Mar 18 14:06:03 crc kubenswrapper[4857]: I0318 14:06:03.191472 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f319dc5-637c-474f-896f-0133fbb6971f" path="/var/lib/kubelet/pods/1f319dc5-637c-474f-896f-0133fbb6971f/volumes" Mar 18 14:06:03 crc kubenswrapper[4857]: I0318 14:06:03.192293 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff58b69-1f7a-4ba8-a41a-94ff140f68be" path="/var/lib/kubelet/pods/dff58b69-1f7a-4ba8-a41a-94ff140f68be/volumes" Mar 18 14:06:03 crc kubenswrapper[4857]: I0318 14:06:03.195865 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerStarted","Data":"a82708ca5ca79be7cad0e541f386c34482528ec65d8ae5ab882ee05e4ce3b406"} Mar 18 14:06:03 crc kubenswrapper[4857]: I0318 14:06:03.197359 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564046-slwnd" event={"ID":"ca33260b-e859-4d77-9509-3e08e46be7f1","Type":"ContainerStarted","Data":"fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482"} Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.872194 4857 generic.go:334] "Generic (PLEG): container finished" podID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerID="4a63de8df929d1c369414c05e0778aa673bc2c625dc4e2d7864795bab4da30d3" exitCode=0 Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.872290 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerDied","Data":"4a63de8df929d1c369414c05e0778aa673bc2c625dc4e2d7864795bab4da30d3"} Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.875428 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2"] Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.876679 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.890881 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.891328 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.891534 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.891798 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.892014 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.892313 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.928302 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2"] Mar 18 14:06:04 crc 
kubenswrapper[4857]: I0318 14:06:04.957156 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-config\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.957251 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d61789c-ee3d-4aff-99a1-592b91b773c6-serving-cert\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.957305 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-client-ca\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:04 crc kubenswrapper[4857]: I0318 14:06:04.957439 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdgx\" (UniqueName: \"kubernetes.io/projected/0d61789c-ee3d-4aff-99a1-592b91b773c6-kube-api-access-pbdgx\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.058343 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0d61789c-ee3d-4aff-99a1-592b91b773c6-serving-cert\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.058410 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-client-ca\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.058445 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdgx\" (UniqueName: \"kubernetes.io/projected/0d61789c-ee3d-4aff-99a1-592b91b773c6-kube-api-access-pbdgx\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.058485 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-config\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.059571 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-client-ca\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 
14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.059720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d61789c-ee3d-4aff-99a1-592b91b773c6-config\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.078773 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d61789c-ee3d-4aff-99a1-592b91b773c6-serving-cert\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:05 crc kubenswrapper[4857]: I0318 14:06:05.087842 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdgx\" (UniqueName: \"kubernetes.io/projected/0d61789c-ee3d-4aff-99a1-592b91b773c6-kube-api-access-pbdgx\") pod \"route-controller-manager-6f7f765496-hksv2\" (UID: \"0d61789c-ee3d-4aff-99a1-592b91b773c6\") " pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.348448 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.351374 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.351652 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.351732 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.352401 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.381335 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.381413 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.409475 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerStarted","Data":"ecef915baadddc5638b2a49af94ce7de689e1de03537ff50f0d15736ee7ca79a"} Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.410463 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.416922 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.416985 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.436188 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podStartSLOduration=8.436163992000001 podStartE2EDuration="8.436163992s" podCreationTimestamp="2026-03-18 14:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:06:06.436067409 +0000 UTC m=+350.565195876" watchObservedRunningTime="2026-03-18 14:06:06.436163992 +0000 UTC m=+350.565292449" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.876021 4857 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.876850 4857 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877171 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30" gracePeriod=15 Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877323 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877675 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426" gracePeriod=15 Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877723 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b" gracePeriod=15 Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877810 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863" gracePeriod=15 Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.877851 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2" gracePeriod=15 Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879063 4857 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879568 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 
18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879589 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879607 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879616 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879628 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879635 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879643 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879649 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879659 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879665 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879675 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879681 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879692 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879699 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879713 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879720 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 18 14:06:06 crc kubenswrapper[4857]: E0318 14:06:06.879729 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879736 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879895 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879913 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc 
kubenswrapper[4857]: I0318 14:06:06.879924 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879951 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879962 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879971 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.879982 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.880249 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 18 14:06:06 crc kubenswrapper[4857]: I0318 14:06:06.942140 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043239 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043384 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043770 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043851 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043917 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.043957 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.044012 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.149903 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150003 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150057 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150083 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150117 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150129 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150155 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150168 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150185 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150126 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150236 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150322 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150328 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150461 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.150595 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.169413 4857 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.169848 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.170254 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.236766 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:06:07 crc kubenswrapper[4857]: E0318 14:06:07.345996 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.89:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-lmqk2.189df4994231b68c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-lmqk2,UID:a7272920-8e13-4414-8a32-dfea84d2460f,APIVersion:v1,ResourceVersion:28563,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 6.635s (6.635s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,LastTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.473407 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.475069 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.476689 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426" exitCode=0 Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.476720 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b" exitCode=0 Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.476732 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863" exitCode=0 Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.476740 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2" exitCode=2 Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.476818 4857 scope.go:117] "RemoveContainer" 
containerID="f99ca4a7ca54dcee31dc9ed2ff6ff32958c2025d17c9dd9a3e8fc8d6db408cb8" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.496599 4857 generic.go:334] "Generic (PLEG): container finished" podID="629717da-142d-436b-bb10-642182966fd8" containerID="ffdf6d88caf10d6a54561c57816f1cdabb947464ff1075c00f34fb77d7b24ade" exitCode=0 Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.496654 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"629717da-142d-436b-bb10-642182966fd8","Type":"ContainerDied","Data":"ffdf6d88caf10d6a54561c57816f1cdabb947464ff1075c00f34fb77d7b24ade"} Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.499429 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.499697 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.500020 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.504608 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.505138 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.505346 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.506207 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.506398 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.670896 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q8pg8" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:07 crc 
kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:07 crc kubenswrapper[4857]: > Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.682937 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:07 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:07 crc kubenswrapper[4857]: > Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.855924 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.855986 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:06:07 crc kubenswrapper[4857]: I0318 14:06:07.890977 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:07 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:07 crc kubenswrapper[4857]: > Mar 18 14:06:08 crc kubenswrapper[4857]: I0318 14:06:08.158002 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:06:08 crc kubenswrapper[4857]: I0318 14:06:08.158045 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:06:08 crc kubenswrapper[4857]: I0318 14:06:08.601709 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" 
event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerStarted","Data":"aa2697a80a10a323dac7b8f2726805cafb7735429ae521dee61202d1304ca69d"} Mar 18 14:06:08 crc kubenswrapper[4857]: I0318 14:06:08.604958 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"02ed7d2ce5e481d474c3ca1463ff211cd4ee664a94262b25e5838e4b72e9564d"} Mar 18 14:06:08 crc kubenswrapper[4857]: I0318 14:06:08.604995 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"327777ccfd248db530b76ff19f52a4c5cb35c5cbb7727f4804510416f5fc73a6"} Mar 18 14:06:08 crc kubenswrapper[4857]: E0318 14:06:08.834025 4857 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 14:06:08 crc kubenswrapper[4857]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c" Netns:"/var/run/netns/4df5dea1-02c9-4ad6-bf80-859de459dec1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:08 crc kubenswrapper[4857]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:08 crc kubenswrapper[4857]: > Mar 18 14:06:08 crc kubenswrapper[4857]: E0318 14:06:08.834129 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 14:06:08 crc kubenswrapper[4857]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c" 
Netns:"/var/run/netns/4df5dea1-02c9-4ad6-bf80-859de459dec1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:08 crc kubenswrapper[4857]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:08 crc kubenswrapper[4857]: > pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:08 crc kubenswrapper[4857]: E0318 14:06:08.834169 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 14:06:08 crc kubenswrapper[4857]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c" Netns:"/var/run/netns/4df5dea1-02c9-4ad6-bf80-859de459dec1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:08 crc kubenswrapper[4857]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:08 crc kubenswrapper[4857]: > pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:08 crc kubenswrapper[4857]: E0318 14:06:08.834251 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager(0d61789c-ee3d-4aff-99a1-592b91b773c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager(0d61789c-ee3d-4aff-99a1-592b91b773c6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c\\\" Netns:\\\"/var/run/netns/4df5dea1-02c9-4ad6-bf80-859de459dec1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=7de19fb913a4fab61629a733150bbcb15ec7d8bb5b591203efd01e445fb8d38c;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s\\\": dial tcp 38.102.83.89:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.109361 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-l9sbh" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:09 crc kubenswrapper[4857]: > Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.311831 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:06:09 crc 
kubenswrapper[4857]: I0318 14:06:09.320138 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.399577 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2g48f" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:09 crc kubenswrapper[4857]: > Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.433453 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:06:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:06:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:06:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T14:06:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.434035 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.434392 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.434731 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.435018 4857 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: E0318 14:06:09.435113 4857 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.468792 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.470675 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.471439 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" 
pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.471858 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.472245 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.472687 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.516296 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.517177 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.518413 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.518892 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.519178 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.519371 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.618400 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access\") pod \"629717da-142d-436b-bb10-642182966fd8\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.618506 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir\") pod \"629717da-142d-436b-bb10-642182966fd8\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.618531 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock\") pod \"629717da-142d-436b-bb10-642182966fd8\" (UID: \"629717da-142d-436b-bb10-642182966fd8\") " Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.618935 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock" (OuterVolumeSpecName: "var-lock") pod "629717da-142d-436b-bb10-642182966fd8" (UID: "629717da-142d-436b-bb10-642182966fd8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.618977 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "629717da-142d-436b-bb10-642182966fd8" (UID: "629717da-142d-436b-bb10-642182966fd8"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.620832 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"629717da-142d-436b-bb10-642182966fd8","Type":"ContainerDied","Data":"d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498"} Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.620876 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d87e1101b84618c1e99964d832dfcece87868e03ca70a5d14586ba0c86ab4498" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.620947 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.624238 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.625275 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.626075 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.626097 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.626791 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.627266 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.627599 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.627897 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.629328 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.629611 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.631337 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "629717da-142d-436b-bb10-642182966fd8" (UID: "629717da-142d-436b-bb10-642182966fd8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.631552 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.632083 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.632312 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.632520 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.674721 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.676478 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.677064 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.677446 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.677829 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.678151 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.678464 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.720496 4857 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.720540 4857 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/629717da-142d-436b-bb10-642182966fd8-var-lock\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.720552 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/629717da-142d-436b-bb10-642182966fd8-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.968259 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.968460 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.968638 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.968837 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.969028 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:09 crc kubenswrapper[4857]: I0318 14:06:09.969205 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: E0318 14:06:10.370473 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.89:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-lmqk2.189df4994231b68c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-lmqk2,UID:a7272920-8e13-4414-8a32-dfea84d2460f,APIVersion:v1,ResourceVersion:28563,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 6.635s (6.635s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,LastTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.375218 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.376243 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.376728 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.377152 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.377502 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" 
Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.377804 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.378103 4857 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.378397 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.378620 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.531291 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.531780 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.531818 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.531491 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.532260 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.532331 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.633243 4857 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.633289 4857 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.633304 4857 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.636647 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.637506 4857 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30" exitCode=0 Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.637651 4857 scope.go:117] "RemoveContainer" containerID="66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.637778 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.654504 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.655141 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.655543 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.655924 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.656247 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.656556 4857 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:10 crc kubenswrapper[4857]: I0318 14:06:10.656975 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.171492 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.384065 4857 scope.go:117] "RemoveContainer" containerID="440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.479866 4857 scope.go:117] "RemoveContainer" containerID="39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.646012 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.841367 4857 scope.go:117] "RemoveContainer" containerID="7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2" Mar 18 14:06:11 crc 
kubenswrapper[4857]: I0318 14:06:11.920216 4857 scope.go:117] "RemoveContainer" containerID="05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30" Mar 18 14:06:11 crc kubenswrapper[4857]: I0318 14:06:11.985867 4857 scope.go:117] "RemoveContainer" containerID="5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.050576 4857 scope.go:117] "RemoveContainer" containerID="66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.052071 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\": container with ID starting with 66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426 not found: ID does not exist" containerID="66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.052109 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426"} err="failed to get container status \"66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\": rpc error: code = NotFound desc = could not find container \"66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426\": container with ID starting with 66887d4e4a1235f7cf3ba0eff1948fa0a5445ae46680356d09bb87f133d95426 not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.052139 4857 scope.go:117] "RemoveContainer" containerID="440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.052621 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\": container with ID starting with 440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b not found: ID does not exist" containerID="440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.052659 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b"} err="failed to get container status \"440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\": rpc error: code = NotFound desc = could not find container \"440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b\": container with ID starting with 440d1c43fe28b8f7d2003c22ef64c20f746ed3a60d7dd1eafe24ae3a279ceb6b not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.052699 4857 scope.go:117] "RemoveContainer" containerID="39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.053046 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\": container with ID starting with 39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863 not found: ID does not exist" containerID="39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.053086 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863"} err="failed to get container status \"39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\": rpc error: code = NotFound desc = could not find container \"39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863\": container with ID 
starting with 39852af4057d6b575184b96744c85116debb162238e8ff2539044537876c7863 not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.053103 4857 scope.go:117] "RemoveContainer" containerID="7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.053732 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\": container with ID starting with 7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2 not found: ID does not exist" containerID="7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.053833 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2"} err="failed to get container status \"7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\": rpc error: code = NotFound desc = could not find container \"7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2\": container with ID starting with 7f240485a088ecc331ed2d189bc53c178d85d0709a0f96c6a9f0d793f7c126b2 not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.053891 4857 scope.go:117] "RemoveContainer" containerID="05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.054351 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\": container with ID starting with 05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30 not found: ID does not exist" containerID="05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30" Mar 18 
14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.054384 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30"} err="failed to get container status \"05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\": rpc error: code = NotFound desc = could not find container \"05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30\": container with ID starting with 05a7f348a61b07922ce272630d6553fae5b124ad81e1c62c9ec4f68524509b30 not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.054413 4857 scope.go:117] "RemoveContainer" containerID="5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.054692 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\": container with ID starting with 5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887 not found: ID does not exist" containerID="5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.054720 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887"} err="failed to get container status \"5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\": rpc error: code = NotFound desc = could not find container \"5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887\": container with ID starting with 5cebd682661a090181bb2e4c55b00a78b48af48a964295e9cb0bc0bd72c29887 not found: ID does not exist" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.459713 4857 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 14:06:12 crc kubenswrapper[4857]: rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022" Netns:"/var/run/netns/da79b734-c0f6-4b48-adf8-b95c4b73e786" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:12 crc kubenswrapper[4857]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:12 crc kubenswrapper[4857]: > Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.460090 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 14:06:12 crc kubenswrapper[4857]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022" Netns:"/var/run/netns/da79b734-c0f6-4b48-adf8-b95c4b73e786" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod 
route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:12 crc kubenswrapper[4857]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:12 crc kubenswrapper[4857]: > pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.460116 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 14:06:12 crc kubenswrapper[4857]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022): error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022" Netns:"/var/run/netns/da79b734-c0f6-4b48-adf8-b95c4b73e786" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6" Path:"" ERRORED: error configuring 
pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s": dial tcp 38.102.83.89:6443: connect: connection refused Mar 18 14:06:12 crc kubenswrapper[4857]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 14:06:12 crc kubenswrapper[4857]: > pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.460204 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager(0d61789c-ee3d-4aff-99a1-592b91b773c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager(0d61789c-ee3d-4aff-99a1-592b91b773c6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6f7f765496-hksv2_openshift-route-controller-manager_0d61789c-ee3d-4aff-99a1-592b91b773c6_0(192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022): 
error adding pod openshift-route-controller-manager_route-controller-manager-6f7f765496-hksv2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022\\\" Netns:\\\"/var/run/netns/da79b734-c0f6-4b48-adf8-b95c4b73e786\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6f7f765496-hksv2;K8S_POD_INFRA_CONTAINER_ID=192929f109f355b2c663bfe916a92c1e0543afb288e6250c906eaa63ac4b0022;K8S_POD_UID=0d61789c-ee3d-4aff-99a1-592b91b773c6\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2/0d61789c-ee3d-4aff-99a1-592b91b773c6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-6f7f765496-hksv2 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6f7f765496-hksv2?timeout=1m0s\\\": dial tcp 38.102.83.89:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.504358 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.504421 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.646354 4857 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.647203 4857 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.647671 4857 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.648050 4857 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.648442 4857 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.648505 4857 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.648963 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="200ms" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.657548 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerStarted","Data":"b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261"} Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.658402 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.658647 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.658912 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.659151 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.659389 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.659627 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.660565 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.662260 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-c867bfcc4-nc2bq_3d4741b7-1f3f-405d-b675-d0141044421a/controller-manager/0.log" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.662307 4857 generic.go:334] "Generic (PLEG): container finished" podID="3d4741b7-1f3f-405d-b675-d0141044421a" containerID="ecef915baadddc5638b2a49af94ce7de689e1de03537ff50f0d15736ee7ca79a" exitCode=255 Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.662357 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerDied","Data":"ecef915baadddc5638b2a49af94ce7de689e1de03537ff50f0d15736ee7ca79a"} Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.663276 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.663512 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.663816 4857 scope.go:117] "RemoveContainer" containerID="ecef915baadddc5638b2a49af94ce7de689e1de03537ff50f0d15736ee7ca79a" Mar 18 14:06:12 crc 
kubenswrapper[4857]: I0318 14:06:12.663972 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.664545 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.664656 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564046-slwnd" event={"ID":"ca33260b-e859-4d77-9509-3e08e46be7f1","Type":"ContainerStarted","Data":"af3077fed793ad6237eaf1924a36c77deb405e3ee3a5507a654bbaac519b4050"} Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.664903 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.665285 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 
14:06:12.665606 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: E0318 14:06:12.849352 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="400ms" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.953241 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.953316 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.953250 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 18 14:06:12 crc kubenswrapper[4857]: I0318 14:06:12.953409 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: E0318 14:06:13.250439 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="800ms" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.680121 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-c867bfcc4-nc2bq_3d4741b7-1f3f-405d-b675-d0141044421a/controller-manager/0.log" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.680341 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerStarted","Data":"153d5363065a7645ab084bbd0be5c6de25c6aa2c2b518991bdeb1ea84bd0509d"} Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.681040 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.681670 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.682329 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 
38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.682384 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564046-slwnd" event={"ID":"ca33260b-e859-4d77-9509-3e08e46be7f1","Type":"ContainerDied","Data":"af3077fed793ad6237eaf1924a36c77deb405e3ee3a5507a654bbaac519b4050"} Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.682357 4857 generic.go:334] "Generic (PLEG): container finished" podID="ca33260b-e859-4d77-9509-3e08e46be7f1" containerID="af3077fed793ad6237eaf1924a36c77deb405e3ee3a5507a654bbaac519b4050" exitCode=0 Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.682894 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.683517 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.683917 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.684196 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.684518 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.684999 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.685282 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.685623 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.686101 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.686370 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.686738 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.687169 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.687624 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.688104 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.688428 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.688981 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.689361 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.689646 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.690008 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:13 crc kubenswrapper[4857]: I0318 14:06:13.690282 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:14 crc kubenswrapper[4857]: E0318 14:06:14.051481 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="1.6s" Mar 18 14:06:14 crc kubenswrapper[4857]: I0318 14:06:14.998705 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:14 crc kubenswrapper[4857]: I0318 14:06:14.999809 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.000282 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.000794 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.001108 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.001579 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.002084 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.002410 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.132322 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxfng\" (UniqueName: \"kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng\") pod \"ca33260b-e859-4d77-9509-3e08e46be7f1\" (UID: \"ca33260b-e859-4d77-9509-3e08e46be7f1\") " Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.141315 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng" (OuterVolumeSpecName: "kube-api-access-fxfng") pod "ca33260b-e859-4d77-9509-3e08e46be7f1" (UID: "ca33260b-e859-4d77-9509-3e08e46be7f1"). InnerVolumeSpecName "kube-api-access-fxfng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.234216 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxfng\" (UniqueName: \"kubernetes.io/projected/ca33260b-e859-4d77-9509-3e08e46be7f1-kube-api-access-fxfng\") on node \"crc\" DevicePath \"\"" Mar 18 14:06:15 crc kubenswrapper[4857]: E0318 14:06:15.652460 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="3.2s" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.701442 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564046-slwnd" event={"ID":"ca33260b-e859-4d77-9509-3e08e46be7f1","Type":"ContainerDied","Data":"fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482"} Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.701544 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564046-slwnd" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.701621 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdf126865567f92d671c5d13417c866824c211906e2775d366b849a999c2482" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.706379 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.706855 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.707335 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.707679 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.707987 4857 status_manager.go:851] "Failed to get 
status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.708269 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:15 crc kubenswrapper[4857]: I0318 14:06:15.708620 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.357433 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.358452 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.358948 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.359351 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.359941 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.360290 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.360513 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.360720 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" 
pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.361121 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.389029 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.389819 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.390500 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.391188 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: 
connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.391708 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.392115 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.392415 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.392714 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.393067 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection 
refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.393318 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.400317 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.401194 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.401605 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.402024 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.402385 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" 
pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.402714 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.403140 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.403412 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.403767 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.404142 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.433245 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.436901 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.437299 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.437664 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.440664 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.441169 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.441644 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.442047 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.442374 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.442729 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.443259 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.444207 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.444713 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.444974 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.445312 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc 
kubenswrapper[4857]: I0318 14:06:16.445939 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.446240 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.446651 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.447093 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.447365 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc 
kubenswrapper[4857]: I0318 14:06:16.447585 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.447821 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.474969 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.475802 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.476201 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.476602 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.476921 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.477241 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.477497 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.477787 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.478101 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.478369 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.478660 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.528089 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.528150 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.567383 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.568311 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc 
kubenswrapper[4857]: I0318 14:06:16.568709 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.569432 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.569898 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.570117 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.570361 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc 
kubenswrapper[4857]: I0318 14:06:16.570553 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.570737 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.571129 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:16 crc kubenswrapper[4857]: I0318 14:06:16.571573 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.001589 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.003353 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" 
pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.004001 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.004349 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.004678 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.005203 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.005461 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.005804 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.006885 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.007460 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.007814 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.008254 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.548314 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.548561 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.548735 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.549041 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.549318 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.551809 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.552407 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.556895 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.557355 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.557629 
4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.558364 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.920650 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.922056 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.922504 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.922814 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.923306 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.924002 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.924440 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.924990 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.925437 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.925779 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.926111 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.926884 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.927182 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.960050 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:06:17 
crc kubenswrapper[4857]: I0318 14:06:17.961426 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.961844 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.962310 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.962669 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.963076 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection 
refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.963395 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.963663 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.963997 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.964297 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.964576 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: 
connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.964961 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:17 crc kubenswrapper[4857]: I0318 14:06:17.965260 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.198643 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.199673 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.200036 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.200324 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" 
pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.200732 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.201391 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.201898 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.202197 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.202635 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.203062 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.203494 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.204287 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.204692 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.205193 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.235780 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.236440 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.236947 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.237459 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.237721 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.238014 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.238364 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.238857 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.239174 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.239569 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.239881 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.240176 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.240441 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: I0318 14:06:18.240704 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:18 crc kubenswrapper[4857]: E0318 14:06:18.852947 4857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.89:6443: connect: connection refused" interval="6.4s" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.423427 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.590799 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.633245 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.634086 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.634486 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.634775 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.635042 4857 
status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.635334 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.635566 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.635824 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.636081 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 
14:06:19.636372 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.636656 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.637048 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.637359 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.637800 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc 
kubenswrapper[4857]: I0318 14:06:19.766505 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.767362 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.767801 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.768240 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.768544 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.768849 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.769171 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.769524 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.769842 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.770159 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.770419 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" 
pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.770708 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.771033 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:19 crc kubenswrapper[4857]: I0318 14:06:19.771287 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:20 crc kubenswrapper[4857]: E0318 14:06:20.372351 4857 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.89:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-lmqk2.189df4994231b68c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-lmqk2,UID:a7272920-8e13-4414-8a32-dfea84d2460f,APIVersion:v1,ResourceVersion:28563,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 6.635s (6.635s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,LastTimestamp:2026-03-18 14:06:07.33157134 +0000 UTC m=+351.460699797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.098283 4857 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.098730 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.163168 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.164401 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.165090 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.165636 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.166031 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.166443 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.166817 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.167369 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.167903 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.168201 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.168605 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.168911 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.169294 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.169838 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.179868 4857 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.179931 4857 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:22 crc kubenswrapper[4857]: E0318 14:06:22.180524 4857 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.181343 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:22 crc kubenswrapper[4857]: W0318 14:06:22.217870 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9233bf0cf89afe341c62d6bf6fd02bd016120dfc3551b5fdb3c221c92ac6ab12 WatchSource:0}: Error finding container 9233bf0cf89afe341c62d6bf6fd02bd016120dfc3551b5fdb3c221c92ac6ab12: Status 404 returned error can't find the container with id 9233bf0cf89afe341c62d6bf6fd02bd016120dfc3551b5fdb3c221c92ac6ab12 Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.751554 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.753145 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.753276 4857 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3" exitCode=1 Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.753390 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3"} Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.756312 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.756855 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.757051 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5f9bf5dadbdd1ea1308bd9273e04224c59fb5b2f95de8323fa5acead500e2730"} Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.757254 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9233bf0cf89afe341c62d6bf6fd02bd016120dfc3551b5fdb3c221c92ac6ab12"} Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.757306 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc 
kubenswrapper[4857]: I0318 14:06:22.757518 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.757732 4857 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.758037 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.758283 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.759110 4857 scope.go:117] "RemoveContainer" containerID="40e105b328fca333c0f11eba9ed5505ec0046a53aa99fab49986248ff75ddbb3" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.759795 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.760829 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.761679 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.762187 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.762743 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.763034 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" 
pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.763223 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.766612 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.976359 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gvkpz" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.977083 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.977350 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.977551 4857 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.977743 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.978049 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.978484 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.979246 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.979517 4857 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.979777 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.980053 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.980308 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.980566 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.980835 4857 
status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.981083 4857 status_manager.go:851] "Failed to get status for pod" podUID="ef638f17-5999-467e-b170-8ef20068e451" pod="openshift-console/downloads-7954f5f757-gvkpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-gvkpz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:22 crc kubenswrapper[4857]: I0318 14:06:22.981386 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.770110 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.771465 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.771545 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2dc0f3f2f31706ae5e0946a045de7d6f4f27ae0bf588413b8ceca3f448afc63b"} Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 
14:06:23.774203 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.774543 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.776610 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.776916 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.777245 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc 
kubenswrapper[4857]: I0318 14:06:23.777462 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.777540 4857 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5f9bf5dadbdd1ea1308bd9273e04224c59fb5b2f95de8323fa5acead500e2730" exitCode=0 Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.777592 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5f9bf5dadbdd1ea1308bd9273e04224c59fb5b2f95de8323fa5acead500e2730"} Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.777646 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.777874 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.778048 4857 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:23 crc 
kubenswrapper[4857]: I0318 14:06:23.778076 4857 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.778247 4857 status_manager.go:851] "Failed to get status for pod" podUID="ef638f17-5999-467e-b170-8ef20068e451" pod="openshift-console/downloads-7954f5f757-gvkpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-gvkpz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: E0318 14:06:23.778381 4857 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.778646 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.778853 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.779113 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.779364 4857 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.779700 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.780120 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.780418 4857 status_manager.go:851] "Failed to get status for pod" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" pod="openshift-marketplace/redhat-marketplace-l9sbh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l9sbh\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.780639 4857 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.780903 4857 status_manager.go:851] "Failed to get status for pod" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" pod="openshift-infra/auto-csr-approver-29564046-slwnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29564046-slwnd\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.785253 4857 status_manager.go:851] "Failed to get status for pod" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" pod="openshift-marketplace/certified-operators-hzfl4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-hzfl4\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.785773 4857 status_manager.go:851] "Failed to get status for pod" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-c867bfcc4-nc2bq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.786248 4857 status_manager.go:851] "Failed to get status for pod" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" pod="openshift-marketplace/redhat-operators-lmqk2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lmqk2\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.786510 4857 status_manager.go:851] "Failed to get status for pod" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" 
pod="openshift-marketplace/community-operators-dz4vq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dz4vq\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.786746 4857 status_manager.go:851] "Failed to get status for pod" podUID="ef638f17-5999-467e-b170-8ef20068e451" pod="openshift-console/downloads-7954f5f757-gvkpz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-gvkpz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.787180 4857 status_manager.go:851] "Failed to get status for pod" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" pod="openshift-marketplace/redhat-operators-2g48f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2g48f\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.787690 4857 status_manager.go:851] "Failed to get status for pod" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" pod="openshift-marketplace/certified-operators-q8pg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q8pg8\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.788053 4857 status_manager.go:851] "Failed to get status for pod" podUID="629717da-142d-436b-bb10-642182966fd8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.788349 4857 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.788721 4857 status_manager.go:851] "Failed to get status for pod" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" pod="openshift-marketplace/redhat-marketplace-c89xj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c89xj\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.789209 4857 status_manager.go:851] "Failed to get status for pod" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" pod="openshift-marketplace/community-operators-cghkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cghkz\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.789513 4857 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.89:6443: connect: connection refused" Mar 18 14:06:23 crc kubenswrapper[4857]: I0318 14:06:23.887529 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:06:24 crc kubenswrapper[4857]: I0318 14:06:24.803114 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5556f8d197f40b276577702551f6857338f1ce399177f9a0173d164d18d74978"} Mar 18 14:06:24 crc 
kubenswrapper[4857]: I0318 14:06:24.803203 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0a4b5734cc4e48cfd20a10a4b32c6d1b424b84b560e0e61bf96f09698aa79b98"} Mar 18 14:06:25 crc kubenswrapper[4857]: I0318 14:06:25.814456 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4e1c554b52d59a1f8da95d0f57eb97bf94b6bfccf68e8866da10c8f8d876d13f"} Mar 18 14:06:25 crc kubenswrapper[4857]: I0318 14:06:25.814846 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5bb87171b8dd6e3e53fb81d897afd2defc631dd883b1ddda2bc35f3c6c7e9fa"} Mar 18 14:06:25 crc kubenswrapper[4857]: I0318 14:06:25.814865 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cab8d35da77498c5df7c54f11e85e67746aed317af0b2e1ee6f3963ada30cee1"} Mar 18 14:06:25 crc kubenswrapper[4857]: I0318 14:06:25.814911 4857 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:25 crc kubenswrapper[4857]: I0318 14:06:25.814944 4857 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:26 crc kubenswrapper[4857]: I0318 14:06:26.694056 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.163194 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.164235 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.190197 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.190271 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.195511 4857 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]log ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]etcd ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-filter ok Mar 18 14:06:27 crc 
kubenswrapper[4857]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-apiextensions-informers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-apiextensions-controllers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/crd-informer-synced ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-system-namespaces-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 18 14:06:27 crc kubenswrapper[4857]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/bootstrap-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/start-kube-aggregator-informers ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-registration-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 18 14:06:27 crc 
kubenswrapper[4857]: [+]poststarthook/apiservice-discovery-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]autoregister-completion ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-openapi-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 18 14:06:27 crc kubenswrapper[4857]: livez check failed Mar 18 14:06:27 crc kubenswrapper[4857]: I0318 14:06:27.195695 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:06:31 crc kubenswrapper[4857]: I0318 14:06:31.364221 4857 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:31 crc kubenswrapper[4857]: I0318 14:06:31.783376 4857 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2532f285-411c-434e-abab-93e74ab0f6d4" Mar 18 14:06:31 crc kubenswrapper[4857]: I0318 14:06:31.859096 4857 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:31 crc kubenswrapper[4857]: I0318 14:06:31.859404 4857 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="27f3c481-ef1a-4bf7-b415-fd8d017f98d7" Mar 18 14:06:31 crc kubenswrapper[4857]: I0318 14:06:31.907118 4857 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2532f285-411c-434e-abab-93e74ab0f6d4" Mar 18 14:06:32 crc 
kubenswrapper[4857]: I0318 14:06:32.767503 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:06:32 crc kubenswrapper[4857]: I0318 14:06:32.778088 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:06:33 crc kubenswrapper[4857]: I0318 14:06:33.013430 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 18 14:06:34 crc kubenswrapper[4857]: I0318 14:06:34.020566 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" event={"ID":"0d61789c-ee3d-4aff-99a1-592b91b773c6","Type":"ContainerStarted","Data":"eba094d46af9955bc10193bf7231c612214121ca94aacd8cc8da5278fb3dde94"} Mar 18 14:06:34 crc kubenswrapper[4857]: I0318 14:06:34.020696 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" event={"ID":"0d61789c-ee3d-4aff-99a1-592b91b773c6","Type":"ContainerStarted","Data":"190ec87f4bbda1284d4aca261d21ee969dd7a9171fc5725f2ce015f30b7930c4"} Mar 18 14:06:34 crc kubenswrapper[4857]: I0318 14:06:34.021137 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:06:35 crc kubenswrapper[4857]: I0318 14:06:35.021533 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:06:35 crc kubenswrapper[4857]: I0318 
14:06:35.021689 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:06:36 crc kubenswrapper[4857]: I0318 14:06:36.026122 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:06:36 crc kubenswrapper[4857]: I0318 14:06:36.026208 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:06:37 crc kubenswrapper[4857]: I0318 14:06:37.350172 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:06:37 crc kubenswrapper[4857]: I0318 14:06:37.350723 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:06:41 crc kubenswrapper[4857]: I0318 14:06:41.368203 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 14:06:41 crc kubenswrapper[4857]: I0318 14:06:41.789160 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.135607 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.218551 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.335348 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.478676 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.674803 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 18 14:06:42 crc kubenswrapper[4857]: I0318 14:06:42.850468 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.043325 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.314941 4857 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.392827 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.423030 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.628313 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.629000 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.629692 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.630570 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.636892 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.966404 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 18 14:06:43 crc kubenswrapper[4857]: I0318 14:06:43.969381 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.040162 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.151984 
4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.357410 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.374353 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.556022 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.763147 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.790587 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.822681 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 14:06:44 crc kubenswrapper[4857]: I0318 14:06:44.976649 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.007713 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.138344 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.187230 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 
14:06:45.232969 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.250559 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.286785 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.318568 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.354164 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.355263 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.379224 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.456194 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.655000 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.657941 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.662081 4857 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.668196 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.716259 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.729629 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.854457 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.933509 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 18 14:06:45 crc kubenswrapper[4857]: I0318 14:06:45.971980 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 14:06:46 crc kubenswrapper[4857]: I0318 14:06:46.036422 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 14:06:46 crc kubenswrapper[4857]: I0318 14:06:46.126356 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 18 14:06:46 crc kubenswrapper[4857]: I0318 14:06:46.215322 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 14:06:46 crc kubenswrapper[4857]: I0318 14:06:46.220417 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 14:06:46 crc kubenswrapper[4857]: I0318 14:06:46.304122 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:46.512725 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.058065 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.058480 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.059838 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.059994 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.060135 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.060223 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.062102 4857 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.138349 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.149691 4857 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.427254 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.427737 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.431213 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.441605 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.461853 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.461965 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.539643 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.593147 4857 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.744190 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.773087 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.876523 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.975105 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 14:06:47 crc kubenswrapper[4857]: I0318 14:06:47.995599 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.100906 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.361167 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.361968 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.371187 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.495004 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 14:06:48 crc 
kubenswrapper[4857]: I0318 14:06:48.500347 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.543736 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.579786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.594355 4857 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.596107 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podStartSLOduration=50.59606379 podStartE2EDuration="50.59606379s" podCreationTimestamp="2026-03-18 14:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:06:34.03990288 +0000 UTC m=+378.169031337" watchObservedRunningTime="2026-03-18 14:06:48.59606379 +0000 UTC m=+392.725192247" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.597851 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lmqk2" podStartSLOduration=48.250573006 podStartE2EDuration="2m41.597843165s" podCreationTimestamp="2026-03-18 14:04:07 +0000 UTC" firstStartedPulling="2026-03-18 14:04:13.98426586 +0000 UTC m=+238.113394327" lastFinishedPulling="2026-03-18 14:06:07.331536029 +0000 UTC m=+351.460664486" observedRunningTime="2026-03-18 14:06:31.561395421 +0000 UTC m=+375.690523888" watchObservedRunningTime="2026-03-18 14:06:48.597843165 +0000 UTC m=+392.726971622" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.598528 4857 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.598522873 podStartE2EDuration="42.598522873s" podCreationTimestamp="2026-03-18 14:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:06:31.491387903 +0000 UTC m=+375.620516360" watchObservedRunningTime="2026-03-18 14:06:48.598522873 +0000 UTC m=+392.727651330" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.598950 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dz4vq" podStartSLOduration=47.196128516 podStartE2EDuration="2m44.598945653s" podCreationTimestamp="2026-03-18 14:04:04 +0000 UTC" firstStartedPulling="2026-03-18 14:04:13.981195126 +0000 UTC m=+238.110323593" lastFinishedPulling="2026-03-18 14:06:11.384012273 +0000 UTC m=+355.513140730" observedRunningTime="2026-03-18 14:06:31.648078034 +0000 UTC m=+375.777206481" watchObservedRunningTime="2026-03-18 14:06:48.598945653 +0000 UTC m=+392.728074110" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.599891 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.599951 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.599975 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2"] Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.600685 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.613814 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.831899 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.834804 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.835835 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.836182 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.839460 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.839633 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.892112 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.892822 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.892788023 podStartE2EDuration="17.892788023s" podCreationTimestamp="2026-03-18 14:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:06:48.858396755 +0000 UTC m=+392.987525212" watchObservedRunningTime="2026-03-18 14:06:48.892788023 +0000 UTC m=+393.021916480" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 
14:06:48.908319 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.927505 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.958025 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.961231 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 14:06:48 crc kubenswrapper[4857]: I0318 14:06:48.981939 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.012177 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.084517 4857 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.898552 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.902954 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.926811 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.927970 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.928185 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.928393 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.928439 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.928546 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.928880 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.929195 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.929414 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.995981 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.998467 4857 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Mar 18 14:06:49 crc kubenswrapper[4857]: I0318 14:06:49.998832 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.029351 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.031405 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.132874 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.153518 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.889231 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.891068 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.905964 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.907891 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.926640 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 
14:06:50.927203 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.927333 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.927238 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.927624 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.927925 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 14:06:50 crc kubenswrapper[4857]: I0318 14:06:50.937252 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.963875 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.969617 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.970200 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.970371 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.970682 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.973235 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.973262 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.973443 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.980387 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.980588 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.980719 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.981035 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.981163 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.981299 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.981443 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 18 14:06:51 crc kubenswrapper[4857]: 
I0318 14:06:51.981590 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.981714 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.983813 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.990297 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 14:06:51 crc kubenswrapper[4857]: I0318 14:06:51.991305 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.022251 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.026220 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.035806 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-c89xj" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:52 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:52 crc kubenswrapper[4857]: > Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.121870 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.164259 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.174309 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.189417 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.201692 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.261253 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.295165 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.393350 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 14:06:52 crc kubenswrapper[4857]: I0318 14:06:52.456644 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.410060 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.410468 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.410625 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 
14:06:53.410972 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411183 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411280 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411420 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411580 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411818 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.411991 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.412068 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.412133 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.412283 4857 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.412352 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.412417 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.413681 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.429727 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.443042 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.575317 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.692854 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.696590 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.728067 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.728707 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.759982 4857 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.760322 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://02ed7d2ce5e481d474c3ca1463ff211cd4ee664a94262b25e5838e4b72e9564d" gracePeriod=5 Mar 18 14:06:53 crc kubenswrapper[4857]: I0318 14:06:53.917055 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.449539 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.455117 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.455489 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.455799 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.465485 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.465675 4857 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.465902 4857 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.466116 4857 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.475310 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.493831 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.580050 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.634199 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 14:06:54 crc kubenswrapper[4857]: I0318 14:06:54.802204 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.832655 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.842658 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.842936 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.843108 4857 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.853187 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.862428 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.869746 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.869770 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.869842 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.869988 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.870109 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.870336 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.870363 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 14:06:55 crc kubenswrapper[4857]: I0318 14:06:55.870524 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 14:06:56 crc 
kubenswrapper[4857]: I0318 14:06:56.772444 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.772688 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.773838 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.774339 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.774492 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.774632 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.774791 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 14:06:56 crc kubenswrapper[4857]: I0318 14:06:56.774921 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.703610 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.704192 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.704074 4857 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.704382 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.704581 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.704797 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.706588 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 14:06:57 crc kubenswrapper[4857]: I0318 14:06:57.864351 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 14:06:59 crc kubenswrapper[4857]: I0318 14:06:59.019333 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 14:06:59 crc kubenswrapper[4857]: I0318 14:06:59.052875 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:59 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:06:59 crc kubenswrapper[4857]: > Mar 18 14:06:59 crc kubenswrapper[4857]: I0318 14:06:59.075440 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" probeResult="failure" output=< Mar 18 14:06:59 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" 
within 1s Mar 18 14:06:59 crc kubenswrapper[4857]: > Mar 18 14:06:59 crc kubenswrapper[4857]: E0318 14:06:59.446355 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.794s" Mar 18 14:06:59 crc kubenswrapper[4857]: I0318 14:06:59.482288 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.394209 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.394344 4857 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="02ed7d2ce5e481d474c3ca1463ff211cd4ee664a94262b25e5838e4b72e9564d" exitCode=137 Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.555930 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.556333 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.667901 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668054 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668110 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668156 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668191 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668317 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668430 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668471 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668664 4857 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668687 4857 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668697 4857 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.668709 4857 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.684941 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:07:00 crc kubenswrapper[4857]: I0318 14:07:00.815772 4857 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.174120 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.174633 4857 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.194336 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.194413 4857 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="66bb1c18-b7f2-44bf-9bbb-6ca3db646752" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.199258 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.199349 4857 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="66bb1c18-b7f2-44bf-9bbb-6ca3db646752" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.402608 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.402828 4857 scope.go:117] "RemoveContainer" 
containerID="02ed7d2ce5e481d474c3ca1463ff211cd4ee664a94262b25e5838e4b72e9564d" Mar 18 14:07:01 crc kubenswrapper[4857]: I0318 14:07:01.402922 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 18 14:07:03 crc kubenswrapper[4857]: I0318 14:07:03.725163 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 18 14:07:09 crc kubenswrapper[4857]: I0318 14:07:09.003425 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 14:07:10 crc kubenswrapper[4857]: I0318 14:07:10.217942 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 14:07:11 crc kubenswrapper[4857]: I0318 14:07:11.589156 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 14:07:11 crc kubenswrapper[4857]: I0318 14:07:11.651508 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 18 14:07:12 crc kubenswrapper[4857]: I0318 14:07:12.193083 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 14:07:12 crc kubenswrapper[4857]: I0318 14:07:12.194159 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 18 14:07:13 crc kubenswrapper[4857]: I0318 14:07:13.171363 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.335536 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.336553 4857 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" containerID="cri-o://1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.363250 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.364364 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q8pg8" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" containerID="cri-o://9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.378819 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cghkz"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.379240 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" containerID="cri-o://ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.397718 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dz4vq"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.398105 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="registry-server" containerID="cri-o://b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.413731 4857 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.414154 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" containerID="cri-o://b6a745640825244382102719f62339e633eb094ae46221f41cd6ca61a83ede65" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.418696 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.419603 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c89xj" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="registry-server" containerID="cri-o://f810ec1ba2d6d7aa7a6c3de2f8d60f311c51ed09a1b1feea921bce8272e07623" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.430034 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.430474 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l9sbh" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="registry-server" containerID="cri-o://87c3ff8cbfd888dbd57bc57b8da772bd5ff6cc39f6d4d059acc010408f91feca" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.434346 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.434735 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2g48f" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" 
containerName="registry-server" containerID="cri-o://a6ea4af12158e67b8c6b7d32cff44f35f03dcd46f7af116e30ab61bd92f7596c" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.449902 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vc2t4"] Mar 18 14:07:14 crc kubenswrapper[4857]: E0318 14:07:14.450305 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450324 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 18 14:07:14 crc kubenswrapper[4857]: E0318 14:07:14.450341 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" containerName="oc" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450347 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" containerName="oc" Mar 18 14:07:14 crc kubenswrapper[4857]: E0318 14:07:14.450386 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629717da-142d-436b-bb10-642182966fd8" containerName="installer" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450393 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="629717da-142d-436b-bb10-642182966fd8" containerName="installer" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450556 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450571 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="629717da-142d-436b-bb10-642182966fd8" containerName="installer" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.450582 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" containerName="oc" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.451319 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.458337 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.458766 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmqk2" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="registry-server" containerID="cri-o://aa2697a80a10a323dac7b8f2726805cafb7735429ae521dee61202d1304ca69d" gracePeriod=30 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.464714 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vc2t4"] Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.623373 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsjq\" (UniqueName: \"kubernetes.io/projected/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-kube-api-access-lfsjq\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.623443 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.623483 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.724779 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.724855 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.724935 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsjq\" (UniqueName: \"kubernetes.io/projected/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-kube-api-access-lfsjq\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.726429 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: 
\"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.731698 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.741622 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsjq\" (UniqueName: \"kubernetes.io/projected/b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c-kube-api-access-lfsjq\") pod \"marketplace-operator-79b997595-vc2t4\" (UID: \"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c\") " pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.877376 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.913382 4857 generic.go:334] "Generic (PLEG): container finished" podID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerID="1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" exitCode=0 Mar 18 14:07:14 crc kubenswrapper[4857]: I0318 14:07:14.913442 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerDied","Data":"1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.078098 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.329477 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.433925 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vc2t4"] Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.894845 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.928615 4857 generic.go:334] "Generic (PLEG): container finished" podID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerID="9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.928718 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerDied","Data":"9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9"} 
Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.930438 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" event={"ID":"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c","Type":"ContainerStarted","Data":"30f278f6a78a7e338ae83e11097cf8a2cf9f3a6f6bcb625232c6379558bc825e"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.936118 4857 generic.go:334] "Generic (PLEG): container finished" podID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerID="a6ea4af12158e67b8c6b7d32cff44f35f03dcd46f7af116e30ab61bd92f7596c" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.936206 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerDied","Data":"a6ea4af12158e67b8c6b7d32cff44f35f03dcd46f7af116e30ab61bd92f7596c"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.942003 4857 generic.go:334] "Generic (PLEG): container finished" podID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerID="ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.942505 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerDied","Data":"ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.952220 4857 generic.go:334] "Generic (PLEG): container finished" podID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerID="b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.952290 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" 
event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerDied","Data":"b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.959470 4857 generic.go:334] "Generic (PLEG): container finished" podID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerID="87c3ff8cbfd888dbd57bc57b8da772bd5ff6cc39f6d4d059acc010408f91feca" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.959560 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerDied","Data":"87c3ff8cbfd888dbd57bc57b8da772bd5ff6cc39f6d4d059acc010408f91feca"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.966845 4857 generic.go:334] "Generic (PLEG): container finished" podID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerID="f810ec1ba2d6d7aa7a6c3de2f8d60f311c51ed09a1b1feea921bce8272e07623" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.967084 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerDied","Data":"f810ec1ba2d6d7aa7a6c3de2f8d60f311c51ed09a1b1feea921bce8272e07623"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.970343 4857 generic.go:334] "Generic (PLEG): container finished" podID="a7272920-8e13-4414-8a32-dfea84d2460f" containerID="aa2697a80a10a323dac7b8f2726805cafb7735429ae521dee61202d1304ca69d" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.970420 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerDied","Data":"aa2697a80a10a323dac7b8f2726805cafb7735429ae521dee61202d1304ca69d"} Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.988032 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerID="b6a745640825244382102719f62339e633eb094ae46221f41cd6ca61a83ede65" exitCode=0 Mar 18 14:07:15 crc kubenswrapper[4857]: I0318 14:07:15.988246 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" event={"ID":"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e","Type":"ContainerDied","Data":"b6a745640825244382102719f62339e633eb094ae46221f41cd6ca61a83ede65"} Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.316328 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81 is running failed: container process not found" containerID="ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.316874 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81 is running failed: container process not found" containerID="ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.317153 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81 is running failed: container process not found" containerID="ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.317195 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-cghkz" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.351190 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9 is running failed: container process not found" containerID="9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.351705 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9 is running failed: container process not found" containerID="9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.352163 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9 is running failed: container process not found" containerID="9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.352200 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/certified-operators-q8pg8" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.382408 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8 is running failed: container process not found" containerID="1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.382833 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8 is running failed: container process not found" containerID="1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.383430 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8 is running failed: container process not found" containerID="1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.383481 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-hzfl4" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 
14:07:16.443037 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.530543 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261 is running failed: container process not found" containerID="b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.530896 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261 is running failed: container process not found" containerID="b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.531219 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261 is running failed: container process not found" containerID="b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:07:16 crc kubenswrapper[4857]: E0318 14:07:16.531310 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-dz4vq" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="registry-server" Mar 18 14:07:16 crc 
kubenswrapper[4857]: I0318 14:07:16.552998 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rm8p\" (UniqueName: \"kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p\") pod \"a7272920-8e13-4414-8a32-dfea84d2460f\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.553125 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content\") pod \"a7272920-8e13-4414-8a32-dfea84d2460f\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.553179 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities\") pod \"a7272920-8e13-4414-8a32-dfea84d2460f\" (UID: \"a7272920-8e13-4414-8a32-dfea84d2460f\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.554328 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities" (OuterVolumeSpecName: "utilities") pod "a7272920-8e13-4414-8a32-dfea84d2460f" (UID: "a7272920-8e13-4414-8a32-dfea84d2460f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.559470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p" (OuterVolumeSpecName: "kube-api-access-8rm8p") pod "a7272920-8e13-4414-8a32-dfea84d2460f" (UID: "a7272920-8e13-4414-8a32-dfea84d2460f"). InnerVolumeSpecName "kube-api-access-8rm8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.656706 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rm8p\" (UniqueName: \"kubernetes.io/projected/a7272920-8e13-4414-8a32-dfea84d2460f-kube-api-access-8rm8p\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.656743 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.693092 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.693258 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7272920-8e13-4414-8a32-dfea84d2460f" (UID: "a7272920-8e13-4414-8a32-dfea84d2460f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.701583 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.707451 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.731103 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.736130 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757459 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities\") pod \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757501 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities\") pod \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757591 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qggc8\" (UniqueName: \"kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8\") pod \"1983ba6a-9da7-4d16-8135-1c928be5676b\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757623 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content\") pod \"37ef0e05-d551-4cd1-9399-be898e6a5c85\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757655 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca\") pod \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757707 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities\") pod \"37ef0e05-d551-4cd1-9399-be898e6a5c85\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757726 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content\") pod \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757749 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrs5p\" (UniqueName: \"kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p\") pod \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\" (UID: \"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757787 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgd86\" (UniqueName: \"kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86\") pod \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb4f\" (UniqueName: \"kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f\") pod \"37ef0e05-d551-4cd1-9399-be898e6a5c85\" (UID: \"37ef0e05-d551-4cd1-9399-be898e6a5c85\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757844 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content\") pod \"1983ba6a-9da7-4d16-8135-1c928be5676b\" (UID: 
\"1983ba6a-9da7-4d16-8135-1c928be5676b\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities\") pod \"1983ba6a-9da7-4d16-8135-1c928be5676b\" (UID: \"1983ba6a-9da7-4d16-8135-1c928be5676b\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757908 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content\") pod \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757929 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics\") pod \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\" (UID: \"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.757948 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jht5l\" (UniqueName: \"kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l\") pod \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\" (UID: \"77513906-1d0e-4d29-a4d3-d6cc71e023a8\") " Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.758092 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7272920-8e13-4414-8a32-dfea84d2460f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.759128 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca" 
(OuterVolumeSpecName: "marketplace-trusted-ca") pod "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" (UID: "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.760623 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities" (OuterVolumeSpecName: "utilities") pod "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" (UID: "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.762109 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities" (OuterVolumeSpecName: "utilities") pod "77513906-1d0e-4d29-a4d3-d6cc71e023a8" (UID: "77513906-1d0e-4d29-a4d3-d6cc71e023a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.762041 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities" (OuterVolumeSpecName: "utilities") pod "37ef0e05-d551-4cd1-9399-be898e6a5c85" (UID: "37ef0e05-d551-4cd1-9399-be898e6a5c85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.762234 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities" (OuterVolumeSpecName: "utilities") pod "1983ba6a-9da7-4d16-8135-1c928be5676b" (UID: "1983ba6a-9da7-4d16-8135-1c928be5676b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.772487 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8" (OuterVolumeSpecName: "kube-api-access-qggc8") pod "1983ba6a-9da7-4d16-8135-1c928be5676b" (UID: "1983ba6a-9da7-4d16-8135-1c928be5676b"). InnerVolumeSpecName "kube-api-access-qggc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.772548 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l" (OuterVolumeSpecName: "kube-api-access-jht5l") pod "77513906-1d0e-4d29-a4d3-d6cc71e023a8" (UID: "77513906-1d0e-4d29-a4d3-d6cc71e023a8"). InnerVolumeSpecName "kube-api-access-jht5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.773106 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f" (OuterVolumeSpecName: "kube-api-access-5kb4f") pod "37ef0e05-d551-4cd1-9399-be898e6a5c85" (UID: "37ef0e05-d551-4cd1-9399-be898e6a5c85"). InnerVolumeSpecName "kube-api-access-5kb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.784112 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" (UID: "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.784217 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86" (OuterVolumeSpecName: "kube-api-access-xgd86") pod "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" (UID: "a2c5cd45-6030-4ba1-96fc-ffc82b00af1e"). InnerVolumeSpecName "kube-api-access-xgd86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.793112 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p" (OuterVolumeSpecName: "kube-api-access-mrs5p") pod "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" (UID: "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283"). InnerVolumeSpecName "kube-api-access-mrs5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859103 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37ef0e05-d551-4cd1-9399-be898e6a5c85" (UID: "37ef0e05-d551-4cd1-9399-be898e6a5c85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859181 4857 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859574 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jht5l\" (UniqueName: \"kubernetes.io/projected/77513906-1d0e-4d29-a4d3-d6cc71e023a8-kube-api-access-jht5l\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859733 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859851 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.859978 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qggc8\" (UniqueName: \"kubernetes.io/projected/1983ba6a-9da7-4d16-8135-1c928be5676b-kube-api-access-qggc8\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.860057 4857 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.860128 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc 
kubenswrapper[4857]: I0318 14:07:16.860196 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrs5p\" (UniqueName: \"kubernetes.io/projected/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-kube-api-access-mrs5p\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.860281 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgd86\" (UniqueName: \"kubernetes.io/projected/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e-kube-api-access-xgd86\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.860366 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb4f\" (UniqueName: \"kubernetes.io/projected/37ef0e05-d551-4cd1-9399-be898e6a5c85-kube-api-access-5kb4f\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.860439 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.867825 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" (UID: "9b7db57b-a1ee-4fd5-b525-57c3b7eb8283"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.917416 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77513906-1d0e-4d29-a4d3-d6cc71e023a8" (UID: "77513906-1d0e-4d29-a4d3-d6cc71e023a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.944659 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1983ba6a-9da7-4d16-8135-1c928be5676b" (UID: "1983ba6a-9da7-4d16-8135-1c928be5676b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.964392 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1983ba6a-9da7-4d16-8135-1c928be5676b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.964452 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77513906-1d0e-4d29-a4d3-d6cc71e023a8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.964462 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37ef0e05-d551-4cd1-9399-be898e6a5c85-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:16 crc kubenswrapper[4857]: I0318 14:07:16.964473 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.012288 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cghkz" event={"ID":"9b7db57b-a1ee-4fd5-b525-57c3b7eb8283","Type":"ContainerDied","Data":"dae5e41b9e2cabe78af4140852a07126def56d09cef456ced526fc169154068a"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.012361 4857 scope.go:117] 
"RemoveContainer" containerID="ba950b18eb28811bab27379ddea7fecc93e8b3c8c7cbb36e3d1ebed7a8b4ca81" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.012564 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cghkz" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.018928 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dz4vq" event={"ID":"1983ba6a-9da7-4d16-8135-1c928be5676b","Type":"ContainerDied","Data":"e974a60ef5aee748172c8d2fa381e156df05ae8cbf65f6355b18707e0e51d6e7"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.019002 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dz4vq" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.023812 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.029581 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmqk2" event={"ID":"a7272920-8e13-4414-8a32-dfea84d2460f","Type":"ContainerDied","Data":"331598e380e48fc28bf571a4d5c6608ee3ca32e646c707c85f04e95232253156"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.029703 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmqk2" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.034390 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.037172 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzfl4" event={"ID":"37ef0e05-d551-4cd1-9399-be898e6a5c85","Type":"ContainerDied","Data":"12b9161d37acd741965229c239c7726c0a8659983d3c2dc38a28981831cc06f3"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.037310 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzfl4" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.040316 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8pg8" event={"ID":"77513906-1d0e-4d29-a4d3-d6cc71e023a8","Type":"ContainerDied","Data":"2c3761f41c3564236d10217747b4c9ebfe510d0a7729ac65e4ed7a30536d33ce"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.040517 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q8pg8" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.051914 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" event={"ID":"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c","Type":"ContainerStarted","Data":"fc1203b3a729cafe8010b8e3d66f285038ac11e2ffbb80649c81c48d7c75d1c6"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.053043 4857 scope.go:117] "RemoveContainer" containerID="a2fac1f4719481063e8bce358ff7802a0ec5f434d58d46b223aa0997366bdd02" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.053118 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.060902 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.062111 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" event={"ID":"a2c5cd45-6030-4ba1-96fc-ffc82b00af1e","Type":"ContainerDied","Data":"6e05b8b52ad281994689c98753614bf0713030eb164e71b9cf271678a90d4206"} Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.062224 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kndt2" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065075 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b585n\" (UniqueName: \"kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n\") pod \"9c2eafeb-c191-4d62-ab06-2085407e44e5\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065749 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content\") pod \"9c2eafeb-c191-4d62-ab06-2085407e44e5\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065836 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities\") pod \"f911e035-9c03-4a95-8136-db8bd4e63e9b\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065892 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9rt\" (UniqueName: \"kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt\") pod \"f911e035-9c03-4a95-8136-db8bd4e63e9b\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065972 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content\") pod \"f911e035-9c03-4a95-8136-db8bd4e63e9b\" (UID: \"f911e035-9c03-4a95-8136-db8bd4e63e9b\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.065997 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities\") pod \"9c2eafeb-c191-4d62-ab06-2085407e44e5\" (UID: \"9c2eafeb-c191-4d62-ab06-2085407e44e5\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.068025 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities" (OuterVolumeSpecName: "utilities") pod "9c2eafeb-c191-4d62-ab06-2085407e44e5" (UID: "9c2eafeb-c191-4d62-ab06-2085407e44e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.069581 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities" (OuterVolumeSpecName: "utilities") pod "f911e035-9c03-4a95-8136-db8bd4e63e9b" (UID: "f911e035-9c03-4a95-8136-db8bd4e63e9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.071604 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n" (OuterVolumeSpecName: "kube-api-access-b585n") pod "9c2eafeb-c191-4d62-ab06-2085407e44e5" (UID: "9c2eafeb-c191-4d62-ab06-2085407e44e5"). InnerVolumeSpecName "kube-api-access-b585n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.072493 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt" (OuterVolumeSpecName: "kube-api-access-mb9rt") pod "f911e035-9c03-4a95-8136-db8bd4e63e9b" (UID: "f911e035-9c03-4a95-8136-db8bd4e63e9b"). InnerVolumeSpecName "kube-api-access-mb9rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.077296 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.084267 4857 scope.go:117] "RemoveContainer" containerID="014e065bf5b236c2e0825caad82e605580730b831707b051cad6ecebca748eb6" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.108437 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f911e035-9c03-4a95-8136-db8bd4e63e9b" (UID: "f911e035-9c03-4a95-8136-db8bd4e63e9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.113493 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podStartSLOduration=3.113459596 podStartE2EDuration="3.113459596s" podCreationTimestamp="2026-03-18 14:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:07:17.099246633 +0000 UTC m=+421.228375090" watchObservedRunningTime="2026-03-18 14:07:17.113459596 +0000 UTC m=+421.242588053" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.122941 4857 scope.go:117] "RemoveContainer" containerID="b2c6c7ab732f33cb9e9769c790eb59d4b5cdb12bb30fc84453ab5be402338261" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.145406 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cghkz"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.145536 4857 scope.go:117] "RemoveContainer" containerID="4a63de8df929d1c369414c05e0778aa673bc2c625dc4e2d7864795bab4da30d3" 
Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.152735 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cghkz"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.164948 4857 scope.go:117] "RemoveContainer" containerID="fd37efbb538c041876a93ee5f2163b7e9db5ff2c56f85a394859c2d513d04024" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.169033 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b585n\" (UniqueName: \"kubernetes.io/projected/9c2eafeb-c191-4d62-ab06-2085407e44e5-kube-api-access-b585n\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.169079 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.169098 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9rt\" (UniqueName: \"kubernetes.io/projected/f911e035-9c03-4a95-8136-db8bd4e63e9b-kube-api-access-mb9rt\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.169108 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f911e035-9c03-4a95-8136-db8bd4e63e9b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.169121 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.194584 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" path="/var/lib/kubelet/pods/9b7db57b-a1ee-4fd5-b525-57c3b7eb8283/volumes" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 
14:07:17.199525 4857 scope.go:117] "RemoveContainer" containerID="aa2697a80a10a323dac7b8f2726805cafb7735429ae521dee61202d1304ca69d" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.229971 4857 scope.go:117] "RemoveContainer" containerID="6fd660858e01b81d2306215fbd06a9be62b6b64347d25f39e974f0ac2756598a" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.230427 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.252575 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzfl4"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.261271 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c2eafeb-c191-4d62-ab06-2085407e44e5" (UID: "9c2eafeb-c191-4d62-ab06-2085407e44e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.261683 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.265494 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmqk2"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.267040 4857 scope.go:117] "RemoveContainer" containerID="d33696b0fcdac1a6e2c56ee85a1bcabad1fe3c0e82f8ddd64b7318c7e1de7793" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.269555 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.272960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z842t\" (UniqueName: \"kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t\") pod \"510c03dc-bd76-40f3-abee-55e80cc97ddb\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.273074 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities\") pod \"510c03dc-bd76-40f3-abee-55e80cc97ddb\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.273138 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content\") pod \"510c03dc-bd76-40f3-abee-55e80cc97ddb\" (UID: \"510c03dc-bd76-40f3-abee-55e80cc97ddb\") " Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.273553 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9c2eafeb-c191-4d62-ab06-2085407e44e5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.274369 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities" (OuterVolumeSpecName: "utilities") pod "510c03dc-bd76-40f3-abee-55e80cc97ddb" (UID: "510c03dc-bd76-40f3-abee-55e80cc97ddb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.276001 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kndt2"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.277823 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t" (OuterVolumeSpecName: "kube-api-access-z842t") pod "510c03dc-bd76-40f3-abee-55e80cc97ddb" (UID: "510c03dc-bd76-40f3-abee-55e80cc97ddb"). InnerVolumeSpecName "kube-api-access-z842t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.280585 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dz4vq"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.285868 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dz4vq"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.287408 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.290907 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q8pg8"] Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.300361 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "510c03dc-bd76-40f3-abee-55e80cc97ddb" (UID: "510c03dc-bd76-40f3-abee-55e80cc97ddb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.307166 4857 scope.go:117] "RemoveContainer" containerID="1b49257afff5cff9f69e5fefa5fbee68b4e2f14bc73c57bbfe50cc90c4e2ffa8" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.322881 4857 scope.go:117] "RemoveContainer" containerID="d7ddbdb7031b4caadd5dab8ece79a91eba0fc712310a5dc2dbe7b4dd5ea6d22c" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.340266 4857 scope.go:117] "RemoveContainer" containerID="c68420996ef7af7e9b1a79f72cc65ecb36965ffe0514886d2ee871adf44df785" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.340489 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.355459 4857 scope.go:117] "RemoveContainer" containerID="9ece351e3a9cd811f555a3df02efdae11e74d92e2f63f8aa4a8b0aef69d4d4c9" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.369467 4857 scope.go:117] "RemoveContainer" containerID="f95f63311a0844bf9d5258bf521e469709761ea046a292d89b102642348e2dde" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.375122 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.375212 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/510c03dc-bd76-40f3-abee-55e80cc97ddb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.375225 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z842t\" (UniqueName: \"kubernetes.io/projected/510c03dc-bd76-40f3-abee-55e80cc97ddb-kube-api-access-z842t\") on node \"crc\" DevicePath \"\"" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.384949 
4857 scope.go:117] "RemoveContainer" containerID="e19b88929a72f0b151ff3c9408e20237c0608079b0d351ce94ded6dd7062dacb" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.398559 4857 scope.go:117] "RemoveContainer" containerID="b6a745640825244382102719f62339e633eb094ae46221f41cd6ca61a83ede65" Mar 18 14:07:17 crc kubenswrapper[4857]: I0318 14:07:17.445857 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.072402 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c89xj" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.072516 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c89xj" event={"ID":"f911e035-9c03-4a95-8136-db8bd4e63e9b","Type":"ContainerDied","Data":"e6e8626a3e7725bc57f2cf89f82ce6c8ab6d00ef09939adbbea6eb834d8fec59"} Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.072593 4857 scope.go:117] "RemoveContainer" containerID="f810ec1ba2d6d7aa7a6c3de2f8d60f311c51ed09a1b1feea921bce8272e07623" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.079651 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l9sbh" event={"ID":"510c03dc-bd76-40f3-abee-55e80cc97ddb","Type":"ContainerDied","Data":"a480e4857b49ee475b6a80df2655d558a1f5ee249b65a372eee2a2d64d9e4c36"} Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.079709 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l9sbh" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.101862 4857 scope.go:117] "RemoveContainer" containerID="c1f3b2bc4264a1b3a1f09df5ab9848cfb7070676e148993498bb7950873b109d" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.103795 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2g48f" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.104577 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2g48f" event={"ID":"9c2eafeb-c191-4d62-ab06-2085407e44e5","Type":"ContainerDied","Data":"618c731494aab00a22f93bbd2fdbb8b746f2dbd36bd4b17e03e8f0b3d7add7e3"} Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.105276 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.116045 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c89xj"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.119614 4857 scope.go:117] "RemoveContainer" containerID="7dce3cda667cceaccdac133e0339c8101f877d800a628cf73c362ea593b143c1" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.128985 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.137581 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l9sbh"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.145869 4857 scope.go:117] "RemoveContainer" containerID="87c3ff8cbfd888dbd57bc57b8da772bd5ff6cc39f6d4d059acc010408f91feca" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.147661 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.154482 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2g48f"] Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.162147 4857 scope.go:117] "RemoveContainer" containerID="0a37bd8d36dac9ffd6a72634e06cb27c905c873c0ce0105b4cc7a2fbf50c14b5" Mar 18 14:07:18 crc kubenswrapper[4857]: 
I0318 14:07:18.191121 4857 scope.go:117] "RemoveContainer" containerID="4fc80e37d1feaf06e64f2acd4d7dd9d2c29b6a8eb3e7eb7d5545333c442ec6c1" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.205802 4857 scope.go:117] "RemoveContainer" containerID="a6ea4af12158e67b8c6b7d32cff44f35f03dcd46f7af116e30ab61bd92f7596c" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.223147 4857 scope.go:117] "RemoveContainer" containerID="4606e4f0f09d379b1168ce9bc9679bac243362b7ff31ffce124bbe5fdcebf653" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.242882 4857 scope.go:117] "RemoveContainer" containerID="1313a1b817c3a0ca16c4ff79007b4c8eea00534fe3fe39e9f9734d1469c87110" Mar 18 14:07:18 crc kubenswrapper[4857]: I0318 14:07:18.539431 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.171982 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" path="/var/lib/kubelet/pods/1983ba6a-9da7-4d16-8135-1c928be5676b/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.173552 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" path="/var/lib/kubelet/pods/37ef0e05-d551-4cd1-9399-be898e6a5c85/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.174715 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" path="/var/lib/kubelet/pods/510c03dc-bd76-40f3-abee-55e80cc97ddb/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.176577 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" path="/var/lib/kubelet/pods/77513906-1d0e-4d29-a4d3-d6cc71e023a8/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.177847 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" path="/var/lib/kubelet/pods/9c2eafeb-c191-4d62-ab06-2085407e44e5/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.179508 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" path="/var/lib/kubelet/pods/a2c5cd45-6030-4ba1-96fc-ffc82b00af1e/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.180270 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" path="/var/lib/kubelet/pods/a7272920-8e13-4414-8a32-dfea84d2460f/volumes" Mar 18 14:07:19 crc kubenswrapper[4857]: I0318 14:07:19.181335 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" path="/var/lib/kubelet/pods/f911e035-9c03-4a95-8136-db8bd4e63e9b/volumes" Mar 18 14:07:20 crc kubenswrapper[4857]: I0318 14:07:20.826498 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 14:07:21 crc kubenswrapper[4857]: I0318 14:07:21.056917 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 14:07:21 crc kubenswrapper[4857]: I0318 14:07:21.075120 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 14:07:22 crc kubenswrapper[4857]: I0318 14:07:22.377365 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 18 14:07:22 crc kubenswrapper[4857]: I0318 14:07:22.814712 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 14:07:23 crc kubenswrapper[4857]: I0318 14:07:23.039786 4857 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 14:07:23 crc kubenswrapper[4857]: I0318 14:07:23.925125 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 14:07:24 crc kubenswrapper[4857]: I0318 14:07:24.965830 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 14:07:26 crc kubenswrapper[4857]: I0318 14:07:26.406159 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 14:07:27 crc kubenswrapper[4857]: I0318 14:07:27.311952 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 14:07:29 crc kubenswrapper[4857]: I0318 14:07:29.551949 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 18 14:07:30 crc kubenswrapper[4857]: I0318 14:07:30.216623 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 14:07:30 crc kubenswrapper[4857]: I0318 14:07:30.629570 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 14:07:31 crc kubenswrapper[4857]: I0318 14:07:31.511466 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 14:07:32 crc kubenswrapper[4857]: I0318 14:07:32.354221 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 14:07:35 crc kubenswrapper[4857]: I0318 14:07:35.734814 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 18 14:07:57 crc kubenswrapper[4857]: I0318 
14:07:57.038858 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:07:57 crc kubenswrapper[4857]: I0318 14:07:57.039998 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171119 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564048-fjxnk"] Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171790 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171821 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171845 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171853 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171861 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171868 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171877 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171884 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171893 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171899 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171907 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171914 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171923 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171931 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171942 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171947 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171956 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171964 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171973 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171978 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.171988 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.171994 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172004 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172010 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172019 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172025 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172033 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172039 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172049 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172054 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172064 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172071 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172081 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172088 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172099 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172105 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172115 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172122 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="extract-content" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172130 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172136 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172145 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172152 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172158 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172164 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172172 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172177 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="extract-utilities" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172185 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172190 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: E0318 14:08:00.172198 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172204 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172343 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="510c03dc-bd76-40f3-abee-55e80cc97ddb" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172358 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="77513906-1d0e-4d29-a4d3-d6cc71e023a8" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172366 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7db57b-a1ee-4fd5-b525-57c3b7eb8283" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172373 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ef0e05-d551-4cd1-9399-be898e6a5c85" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172381 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f911e035-9c03-4a95-8136-db8bd4e63e9b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172390 4857 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1983ba6a-9da7-4d16-8135-1c928be5676b" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172397 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7272920-8e13-4414-8a32-dfea84d2460f" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172405 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2eafeb-c191-4d62-ab06-2085407e44e5" containerName="registry-server" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.172412 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c5cd45-6030-4ba1-96fc-ffc82b00af1e" containerName="marketplace-operator" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.173020 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.177598 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.177690 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.178009 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.198790 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564048-fjxnk"] Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.219135 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrqx\" (UniqueName: \"kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx\") pod \"auto-csr-approver-29564048-fjxnk\" (UID: \"09d743e0-c9e4-4682-bfd1-e80f5522b013\") " 
pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.320705 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxrqx\" (UniqueName: \"kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx\") pod \"auto-csr-approver-29564048-fjxnk\" (UID: \"09d743e0-c9e4-4682-bfd1-e80f5522b013\") " pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.343672 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxrqx\" (UniqueName: \"kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx\") pod \"auto-csr-approver-29564048-fjxnk\" (UID: \"09d743e0-c9e4-4682-bfd1-e80f5522b013\") " pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.497696 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:00 crc kubenswrapper[4857]: I0318 14:08:00.996128 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564048-fjxnk"] Mar 18 14:08:01 crc kubenswrapper[4857]: I0318 14:08:01.846688 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" event={"ID":"09d743e0-c9e4-4682-bfd1-e80f5522b013","Type":"ContainerStarted","Data":"b154a11a4353d613f30fa85e04207ff430e48524f0ef41b8864aab2bf6590a98"} Mar 18 14:08:02 crc kubenswrapper[4857]: I0318 14:08:02.866463 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" event={"ID":"09d743e0-c9e4-4682-bfd1-e80f5522b013","Type":"ContainerStarted","Data":"628bdf9f8205015d3581ce1098abd36c6903e8a90cfed3b38cdca5a80d2dd441"} Mar 18 14:08:03 crc kubenswrapper[4857]: I0318 14:08:03.875028 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="09d743e0-c9e4-4682-bfd1-e80f5522b013" containerID="628bdf9f8205015d3581ce1098abd36c6903e8a90cfed3b38cdca5a80d2dd441" exitCode=0 Mar 18 14:08:03 crc kubenswrapper[4857]: I0318 14:08:03.875397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" event={"ID":"09d743e0-c9e4-4682-bfd1-e80f5522b013","Type":"ContainerDied","Data":"628bdf9f8205015d3581ce1098abd36c6903e8a90cfed3b38cdca5a80d2dd441"} Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.124854 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.285386 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxrqx\" (UniqueName: \"kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx\") pod \"09d743e0-c9e4-4682-bfd1-e80f5522b013\" (UID: \"09d743e0-c9e4-4682-bfd1-e80f5522b013\") " Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.291532 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx" (OuterVolumeSpecName: "kube-api-access-rxrqx") pod "09d743e0-c9e4-4682-bfd1-e80f5522b013" (UID: "09d743e0-c9e4-4682-bfd1-e80f5522b013"). InnerVolumeSpecName "kube-api-access-rxrqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.386947 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxrqx\" (UniqueName: \"kubernetes.io/projected/09d743e0-c9e4-4682-bfd1-e80f5522b013-kube-api-access-rxrqx\") on node \"crc\" DevicePath \"\"" Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.890239 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" event={"ID":"09d743e0-c9e4-4682-bfd1-e80f5522b013","Type":"ContainerDied","Data":"b154a11a4353d613f30fa85e04207ff430e48524f0ef41b8864aab2bf6590a98"} Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.890311 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b154a11a4353d613f30fa85e04207ff430e48524f0ef41b8864aab2bf6590a98" Mar 18 14:08:05 crc kubenswrapper[4857]: I0318 14:08:05.890402 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564048-fjxnk" Mar 18 14:08:06 crc kubenswrapper[4857]: I0318 14:08:06.004049 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564042-j5cmc"] Mar 18 14:08:06 crc kubenswrapper[4857]: I0318 14:08:06.007114 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564042-j5cmc"] Mar 18 14:08:07 crc kubenswrapper[4857]: I0318 14:08:07.174988 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="287df787-86a7-4a56-b5a1-fb55b6bed91b" path="/var/lib/kubelet/pods/287df787-86a7-4a56-b5a1-fb55b6bed91b/volumes" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.965532 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7"] Mar 18 14:08:10 crc kubenswrapper[4857]: E0318 14:08:10.966513 4857 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="09d743e0-c9e4-4682-bfd1-e80f5522b013" containerName="oc" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.966548 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d743e0-c9e4-4682-bfd1-e80f5522b013" containerName="oc" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.966723 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d743e0-c9e4-4682-bfd1-e80f5522b013" containerName="oc" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.967723 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.970993 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.971304 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.971311 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.971506 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.976375 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 14:08:10 crc kubenswrapper[4857]: I0318 14:08:10.981058 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7"] Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.157221 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/806d4ef9-6274-4c9c-8329-fb4bbf369a80-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.157407 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6cf\" (UniqueName: \"kubernetes.io/projected/806d4ef9-6274-4c9c-8329-fb4bbf369a80-kube-api-access-5w6cf\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.157571 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/806d4ef9-6274-4c9c-8329-fb4bbf369a80-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.259257 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/806d4ef9-6274-4c9c-8329-fb4bbf369a80-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.259353 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6cf\" (UniqueName: \"kubernetes.io/projected/806d4ef9-6274-4c9c-8329-fb4bbf369a80-kube-api-access-5w6cf\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: 
\"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.259426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/806d4ef9-6274-4c9c-8329-fb4bbf369a80-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.260984 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/806d4ef9-6274-4c9c-8329-fb4bbf369a80-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.270904 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/806d4ef9-6274-4c9c-8329-fb4bbf369a80-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.383590 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6cf\" (UniqueName: \"kubernetes.io/projected/806d4ef9-6274-4c9c-8329-fb4bbf369a80-kube-api-access-5w6cf\") pod \"cluster-monitoring-operator-6d5b84845-r4qn7\" (UID: \"806d4ef9-6274-4c9c-8329-fb4bbf369a80\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:11 crc kubenswrapper[4857]: I0318 14:08:11.597207 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" Mar 18 14:08:12 crc kubenswrapper[4857]: I0318 14:08:12.112322 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7"] Mar 18 14:08:13 crc kubenswrapper[4857]: I0318 14:08:13.063002 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" event={"ID":"806d4ef9-6274-4c9c-8329-fb4bbf369a80","Type":"ContainerStarted","Data":"165b61b0c54c81e55bd0994d9b84e0eaf00bbbe79e80d904daf51cbd2a238e10"} Mar 18 14:08:16 crc kubenswrapper[4857]: I0318 14:08:16.914829 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd"] Mar 18 14:08:16 crc kubenswrapper[4857]: I0318 14:08:16.916597 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:16 crc kubenswrapper[4857]: I0318 14:08:16.920204 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 14:08:16 crc kubenswrapper[4857]: I0318 14:08:16.920521 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-w65dc" Mar 18 14:08:16 crc kubenswrapper[4857]: I0318 14:08:16.929362 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd"] Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.019933 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rfczd\" (UID: 
\"d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.100514 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" event={"ID":"806d4ef9-6274-4c9c-8329-fb4bbf369a80","Type":"ContainerStarted","Data":"0f1f002aaae38c68fcff4f8664e10d0aa1763ac6a155fcd1cf1a6be8bd393f41"} Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.121411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rfczd\" (UID: \"d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.128139 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-rfczd\" (UID: \"d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.130498 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r4qn7" podStartSLOduration=3.15956998 podStartE2EDuration="7.130444803s" podCreationTimestamp="2026-03-18 14:08:10 +0000 UTC" firstStartedPulling="2026-03-18 14:08:12.152737195 +0000 UTC m=+476.281865652" lastFinishedPulling="2026-03-18 14:08:16.123612008 +0000 UTC m=+480.252740475" observedRunningTime="2026-03-18 14:08:17.120660461 +0000 UTC m=+481.249788918" watchObservedRunningTime="2026-03-18 14:08:17.130444803 +0000 UTC 
m=+481.259573260" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.239417 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-w65dc" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.247690 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:17 crc kubenswrapper[4857]: I0318 14:08:17.899442 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd"] Mar 18 14:08:18 crc kubenswrapper[4857]: I0318 14:08:18.108120 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" event={"ID":"d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa","Type":"ContainerStarted","Data":"e2eb936afae31f0baf00bd34d1a7aa1c8b2d36ea86ecc5b59a44e3202320ad12"} Mar 18 14:08:20 crc kubenswrapper[4857]: I0318 14:08:20.123019 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" event={"ID":"d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa","Type":"ContainerStarted","Data":"5e2e96812efe19b1639a880ddd9e1f61c5e5f77268fbb316ec64f867cc3e8ae8"} Mar 18 14:08:20 crc kubenswrapper[4857]: I0318 14:08:20.124083 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:20 crc kubenswrapper[4857]: I0318 14:08:20.130453 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" Mar 18 14:08:20 crc kubenswrapper[4857]: I0318 14:08:20.144611 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" 
podStartSLOduration=2.70555662 podStartE2EDuration="4.144563611s" podCreationTimestamp="2026-03-18 14:08:16 +0000 UTC" firstStartedPulling="2026-03-18 14:08:17.906481839 +0000 UTC m=+482.035610296" lastFinishedPulling="2026-03-18 14:08:19.34548883 +0000 UTC m=+483.474617287" observedRunningTime="2026-03-18 14:08:20.141487142 +0000 UTC m=+484.270615619" watchObservedRunningTime="2026-03-18 14:08:20.144563611 +0000 UTC m=+484.273692068" Mar 18 14:08:20 crc kubenswrapper[4857]: I0318 14:08:20.998399 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zv2t7"] Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.000369 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.169336 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-258c7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.174416 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.174517 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.174672 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.213456 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zv2t7"] Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.267786 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.267853 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.267885 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82gf5\" (UniqueName: \"kubernetes.io/projected/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-kube-api-access-82gf5\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.267963 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.297913 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs9hw"] Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.298810 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.314354 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs9hw"] Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.369434 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.369638 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.370778 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.370856 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82gf5\" (UniqueName: \"kubernetes.io/projected/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-kube-api-access-82gf5\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " 
pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.373790 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.377413 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.378486 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.392042 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82gf5\" (UniqueName: \"kubernetes.io/projected/26e4c4bc-7edb-45a7-8856-3a9e0146fcea-kube-api-access-82gf5\") pod \"prometheus-operator-db54df47d-zv2t7\" (UID: \"26e4c4bc-7edb-45a7-8856-3a9e0146fcea\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.637621 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638050 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36a23d85-3d22-400f-aae8-65d1656e2f6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638091 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-bound-sa-token\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638136 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36a23d85-3d22-400f-aae8-65d1656e2f6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638173 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-certificates\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638201 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bgl\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-kube-api-access-85bgl\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638251 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-tls\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638289 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-trusted-ca\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.638867 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.674185 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739430 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36a23d85-3d22-400f-aae8-65d1656e2f6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739514 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-bound-sa-token\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739592 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36a23d85-3d22-400f-aae8-65d1656e2f6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739626 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-certificates\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739655 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85bgl\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-kube-api-access-85bgl\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739711 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-tls\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.739779 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-trusted-ca\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.740216 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/36a23d85-3d22-400f-aae8-65d1656e2f6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.741530 4857 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-certificates\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.743693 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/36a23d85-3d22-400f-aae8-65d1656e2f6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.747328 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-registry-tls\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.755206 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/36a23d85-3d22-400f-aae8-65d1656e2f6d-trusted-ca\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.761172 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85bgl\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-kube-api-access-85bgl\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.765733 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36a23d85-3d22-400f-aae8-65d1656e2f6d-bound-sa-token\") pod \"image-registry-66df7c8f76-vs9hw\" (UID: \"36a23d85-3d22-400f-aae8-65d1656e2f6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.914475 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:21 crc kubenswrapper[4857]: I0318 14:08:21.942630 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zv2t7"] Mar 18 14:08:21 crc kubenswrapper[4857]: W0318 14:08:21.946289 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e4c4bc_7edb_45a7_8856_3a9e0146fcea.slice/crio-7ca9a41259ec76675b13cf9da60d2720087c98f3484a7ba9940ad106d67971a1 WatchSource:0}: Error finding container 7ca9a41259ec76675b13cf9da60d2720087c98f3484a7ba9940ad106d67971a1: Status 404 returned error can't find the container with id 7ca9a41259ec76675b13cf9da60d2720087c98f3484a7ba9940ad106d67971a1 Mar 18 14:08:22 crc kubenswrapper[4857]: I0318 14:08:22.416951 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" event={"ID":"26e4c4bc-7edb-45a7-8856-3a9e0146fcea","Type":"ContainerStarted","Data":"7ca9a41259ec76675b13cf9da60d2720087c98f3484a7ba9940ad106d67971a1"} Mar 18 14:08:22 crc kubenswrapper[4857]: I0318 14:08:22.417980 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vs9hw"] Mar 18 14:08:23 crc kubenswrapper[4857]: I0318 14:08:23.560665 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" 
event={"ID":"36a23d85-3d22-400f-aae8-65d1656e2f6d","Type":"ContainerStarted","Data":"067ff96279f5f0b9e1aba3756ee4ef0836f1e8e1df6daf848be36c34562c50fd"} Mar 18 14:08:23 crc kubenswrapper[4857]: I0318 14:08:23.561185 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" event={"ID":"36a23d85-3d22-400f-aae8-65d1656e2f6d","Type":"ContainerStarted","Data":"f1ff87ef2442ed1e77925643c5fa1a2cc3250dbe5113aef130bb67e2f2b07b50"} Mar 18 14:08:23 crc kubenswrapper[4857]: I0318 14:08:23.561424 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:23 crc kubenswrapper[4857]: I0318 14:08:23.590681 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" podStartSLOduration=2.59065741 podStartE2EDuration="2.59065741s" podCreationTimestamp="2026-03-18 14:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:08:23.584642905 +0000 UTC m=+487.713771372" watchObservedRunningTime="2026-03-18 14:08:23.59065741 +0000 UTC m=+487.719785867" Mar 18 14:08:27 crc kubenswrapper[4857]: I0318 14:08:27.870020 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:08:27 crc kubenswrapper[4857]: I0318 14:08:27.870769 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 18 14:08:27 crc kubenswrapper[4857]: I0318 14:08:27.893852 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:08:27 crc kubenswrapper[4857]: I0318 14:08:27.893916 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:08:27 crc kubenswrapper[4857]: E0318 14:08:27.930902 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.764s" Mar 18 14:08:27 crc kubenswrapper[4857]: I0318 14:08:27.987825 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.000364 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.072847 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" 
event={"ID":"26e4c4bc-7edb-45a7-8856-3a9e0146fcea","Type":"ContainerStarted","Data":"acdaa7cf9cbc75be520a3579ed528d804dc53535e692a1d76be971fab1032124"} Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.120925 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9sl8"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.124275 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.124573 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.125772 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.135439 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.135550 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.138836 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9sl8"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.165837 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166778 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-utilities\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " 
pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166853 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnwf\" (UniqueName: \"kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166882 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhdf\" (UniqueName: \"kubernetes.io/projected/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-kube-api-access-hlhdf\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166914 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166943 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-catalog-content\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.166970 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.172015 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7qbr"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.173919 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.176223 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.188933 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7qbr"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268455 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268534 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-utilities\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268624 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqnwf\" (UniqueName: \"kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf\") pod 
\"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268647 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhdf\" (UniqueName: \"kubernetes.io/projected/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-kube-api-access-hlhdf\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268696 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-catalog-content\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268718 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268739 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-utilities\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.268810 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qjf\" (UniqueName: 
\"kubernetes.io/projected/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-kube-api-access-98qjf\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.269017 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-catalog-content\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.269223 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.269264 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-utilities\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.269442 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.269565 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-catalog-content\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.290863 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhdf\" (UniqueName: \"kubernetes.io/projected/cb7efbe1-5cfd-4ddb-a334-fae43107aafd-kube-api-access-hlhdf\") pod \"redhat-marketplace-f9sl8\" (UID: \"cb7efbe1-5cfd-4ddb-a334-fae43107aafd\") " pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.290900 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqnwf\" (UniqueName: \"kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf\") pod \"community-operators-z72sl\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.371053 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-catalog-content\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.371115 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-utilities\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.371135 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98qjf\" (UniqueName: 
\"kubernetes.io/projected/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-kube-api-access-98qjf\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.371634 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-catalog-content\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.371891 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-utilities\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.390033 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98qjf\" (UniqueName: \"kubernetes.io/projected/bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900-kube-api-access-98qjf\") pod \"redhat-operators-b7qbr\" (UID: \"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900\") " pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.436337 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zl78l"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.437955 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.440354 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.453405 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl78l"] Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.472474 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-utilities\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.472597 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdhw\" (UniqueName: \"kubernetes.io/projected/155a767b-458f-42b5-86f8-f73f4d585ee0-kube-api-access-xcdhw\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.472684 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-catalog-content\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.482944 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.507702 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.518339 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.575313 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-utilities\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.574684 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-utilities\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.575446 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcdhw\" (UniqueName: \"kubernetes.io/projected/155a767b-458f-42b5-86f8-f73f4d585ee0-kube-api-access-xcdhw\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.575931 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-catalog-content\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.576304 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/155a767b-458f-42b5-86f8-f73f4d585ee0-catalog-content\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.597001 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcdhw\" (UniqueName: \"kubernetes.io/projected/155a767b-458f-42b5-86f8-f73f4d585ee0-kube-api-access-xcdhw\") pod \"certified-operators-zl78l\" (UID: \"155a767b-458f-42b5-86f8-f73f4d585ee0\") " pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.760714 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 14:08:28 crc kubenswrapper[4857]: I0318 14:08:28.994071 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl78l"] Mar 18 14:08:29 crc kubenswrapper[4857]: W0318 14:08:29.009129 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod155a767b_458f_42b5_86f8_f73f4d585ee0.slice/crio-b3dba81f4658d63cf0b227c2ccd20dc3eb6ec67e5dd168e01812a3910d52a8f8 WatchSource:0}: Error finding container b3dba81f4658d63cf0b227c2ccd20dc3eb6ec67e5dd168e01812a3910d52a8f8: Status 404 returned error can't find the container with id b3dba81f4658d63cf0b227c2ccd20dc3eb6ec67e5dd168e01812a3910d52a8f8 Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.032407 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9sl8"] Mar 18 14:08:29 crc kubenswrapper[4857]: W0318 14:08:29.036462 4857 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb7efbe1_5cfd_4ddb_a334_fae43107aafd.slice/crio-38f95f2c786f248d60a76bd06d349a3b2e68332a115e3b8ec8e4a868acb15f2c WatchSource:0}: Error finding container 38f95f2c786f248d60a76bd06d349a3b2e68332a115e3b8ec8e4a868acb15f2c: Status 404 returned error can't find the container with id 38f95f2c786f248d60a76bd06d349a3b2e68332a115e3b8ec8e4a868acb15f2c Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.059191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerStarted","Data":"38f95f2c786f248d60a76bd06d349a3b2e68332a115e3b8ec8e4a868acb15f2c"} Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.064029 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" event={"ID":"26e4c4bc-7edb-45a7-8856-3a9e0146fcea","Type":"ContainerStarted","Data":"d93065e478cc4887237aa25318842a15f811ba4d76c9b684c0573d27ea1cfd8f"} Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.069429 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerStarted","Data":"b3dba81f4658d63cf0b227c2ccd20dc3eb6ec67e5dd168e01812a3910d52a8f8"} Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.096536 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.100941 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7qbr"] Mar 18 14:08:29 crc kubenswrapper[4857]: I0318 14:08:29.105376 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-zv2t7" podStartSLOduration=6.03435111 podStartE2EDuration="9.105349274s" 
podCreationTimestamp="2026-03-18 14:08:20 +0000 UTC" firstStartedPulling="2026-03-18 14:08:21.949258334 +0000 UTC m=+486.078386791" lastFinishedPulling="2026-03-18 14:08:25.020256508 +0000 UTC m=+489.149384955" observedRunningTime="2026-03-18 14:08:29.091500707 +0000 UTC m=+493.220629164" watchObservedRunningTime="2026-03-18 14:08:29.105349274 +0000 UTC m=+493.234477731" Mar 18 14:08:29 crc kubenswrapper[4857]: W0318 14:08:29.110536 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdc29e5d_a2b4_4260_bbca_f5e1e5cb4900.slice/crio-52f40b670a32fced89425c0caa84f8836b7022867eb9fa8510a37699440877fc WatchSource:0}: Error finding container 52f40b670a32fced89425c0caa84f8836b7022867eb9fa8510a37699440877fc: Status 404 returned error can't find the container with id 52f40b670a32fced89425c0caa84f8836b7022867eb9fa8510a37699440877fc Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.084554 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerDied","Data":"1dcf310c885817f3748da77729c2418279f69b69d5cace618f12a06091a09e76"} Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.084580 4857 generic.go:334] "Generic (PLEG): container finished" podID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerID="1dcf310c885817f3748da77729c2418279f69b69d5cace618f12a06091a09e76" exitCode=0 Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.084779 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerStarted","Data":"26bec054273a1e10b100d2d74ba8f7c495da190ae26ec044582380e5a815b1ee"} Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.093081 4857 generic.go:334] "Generic (PLEG): container finished" podID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" 
containerID="3fad461d8df1e3ce66517ec7f80967975706316b8ade51e70e2c8daaea80ee7c" exitCode=0 Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.093518 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerDied","Data":"3fad461d8df1e3ce66517ec7f80967975706316b8ade51e70e2c8daaea80ee7c"} Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.094999 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerStarted","Data":"52f40b670a32fced89425c0caa84f8836b7022867eb9fa8510a37699440877fc"} Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.108925 4857 generic.go:334] "Generic (PLEG): container finished" podID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerID="2e1e655581e75f7ca66d7dac3416f2426d98595ecc4198484c094fdae4d4a7af" exitCode=0 Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.109147 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerDied","Data":"2e1e655581e75f7ca66d7dac3416f2426d98595ecc4198484c094fdae4d4a7af"} Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.114904 4857 generic.go:334] "Generic (PLEG): container finished" podID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerID="ce4ea5d80ec57931a81e1f068505702ccb73ef79dbb1a9035ced69a831cd5cb1" exitCode=0 Mar 18 14:08:30 crc kubenswrapper[4857]: I0318 14:08:30.115564 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerDied","Data":"ce4ea5d80ec57931a81e1f068505702ccb73ef79dbb1a9035ced69a831cd5cb1"} Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.125602 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerStarted","Data":"45dc3f8d217338ab84b73a4924b457e6a90ab4dca567936df0e59efb067c6d98"} Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.659440 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-wqb6t"] Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.661578 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.668128 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.668154 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.673984 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n"] Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.674778 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-92j2x" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.675501 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.680244 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.680247 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.680366 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-fgc87" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.693198 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n"] Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.740591 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn"] Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.747819 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-wtmp\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.748257 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.751597 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-bz5bn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.752958 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.754786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755191 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-tls\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755235 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755302 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swv6g\" (UniqueName: \"kubernetes.io/projected/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-kube-api-access-swv6g\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755426 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-textfile\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755496 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755525 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xf6s\" (UniqueName: \"kubernetes.io/projected/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-kube-api-access-9xf6s\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755681 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755720 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755795 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-sys\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.755844 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-metrics-client-ca\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.757560 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-root\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.757990 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.784113 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn"] Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.858949 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" 
(UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-wtmp\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859007 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-tls\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859034 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859091 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swv6g\" (UniqueName: \"kubernetes.io/projected/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-kube-api-access-swv6g\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859443 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-wtmp\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859690 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859809 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5zbg\" (UniqueName: \"kubernetes.io/projected/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-api-access-t5zbg\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.859844 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-textfile\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860267 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-textfile\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860326 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: 
\"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860460 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xf6s\" (UniqueName: \"kubernetes.io/projected/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-kube-api-access-9xf6s\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860517 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860892 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860929 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860954 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-sys\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.860975 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-metrics-client-ca\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.861004 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.861029 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.861052 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 
14:08:31.861074 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-root\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.861138 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-root\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: E0318 14:08:31.860850 4857 secret.go:188] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 18 14:08:31 crc kubenswrapper[4857]: E0318 14:08:31.861244 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls podName:e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19 nodeName:}" failed. No retries permitted until 2026-03-18 14:08:32.361209247 +0000 UTC m=+496.490337704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls") pod "openshift-state-metrics-566fddb674-2mn2n" (UID: "e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19") : secret "openshift-state-metrics-tls" not found Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.861438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.862123 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-sys\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.862150 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-metrics-client-ca\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.870615 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.881470 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.883522 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-node-exporter-tls\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.886845 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xf6s\" (UniqueName: \"kubernetes.io/projected/0b2122df-d225-43c8-87ec-38dc6c7ad5e5-kube-api-access-9xf6s\") pod \"node-exporter-wqb6t\" (UID: \"0b2122df-d225-43c8-87ec-38dc6c7ad5e5\") " pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.887880 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swv6g\" (UniqueName: \"kubernetes.io/projected/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-kube-api-access-swv6g\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5zbg\" (UniqueName: \"kubernetes.io/projected/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-api-access-t5zbg\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962592 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962641 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.962708 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.963745 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.964128 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.964608 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.983090 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.983204 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-wqb6t" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.986537 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5zbg\" (UniqueName: \"kubernetes.io/projected/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-api-access-t5zbg\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:31 crc kubenswrapper[4857]: I0318 14:08:31.989605 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/06fa41c7-1ab7-473d-8cc9-f01a74f10af4-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-m64kn\" (UID: \"06fa41c7-1ab7-473d-8cc9-f01a74f10af4\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.100610 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.147161 4857 generic.go:334] "Generic (PLEG): container finished" podID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerID="45dc3f8d217338ab84b73a4924b457e6a90ab4dca567936df0e59efb067c6d98" exitCode=0 Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.148173 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerDied","Data":"45dc3f8d217338ab84b73a4924b457e6a90ab4dca567936df0e59efb067c6d98"} Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.156678 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-wqb6t" event={"ID":"0b2122df-d225-43c8-87ec-38dc6c7ad5e5","Type":"ContainerStarted","Data":"47d471708487ac698abcc281d24a27177d484cdb489fd3fedc4e86b0e7f9ad83"} Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.160127 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerStarted","Data":"4f815b6c7ef00ae11dc45cbd88ebbf109b0e47be90a954d90d956658e936e4e2"} Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.167191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerStarted","Data":"87f46a070da43c294175b32a2b4fa5450b8b583bb6a8f9a4cece4e7c400adf7a"} Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.187330 4857 generic.go:334] "Generic (PLEG): container finished" podID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerID="18a897a2ff22824c07f31743f0f9551442cf0db263066112f3222f3f709d4d30" exitCode=0 Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.187404 4857 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerDied","Data":"18a897a2ff22824c07f31743f0f9551442cf0db263066112f3222f3f709d4d30"} Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.374477 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.381311 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2mn2n\" (UID: \"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.787949 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.822270 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.825766 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.829136 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.829176 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-zwmbs" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.831118 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.833255 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.833541 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.835231 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.844359 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.844836 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.849335 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.850100 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.863282 4857 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn"] Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.994673 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.994771 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.994992 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995187 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995266 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvknl\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-kube-api-access-bvknl\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995402 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-out\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995624 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-volume\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995680 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-web-config\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:32 crc kubenswrapper[4857]: I0318 14:08:32.995716 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.100733 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-web-config\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.100840 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.100944 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101000 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101070 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101120 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101148 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101167 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvknl\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-kube-api-access-bvknl\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101264 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-out\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101307 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101333 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.101362 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-volume\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.102405 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.112273 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.115869 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.120583 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.125445 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-out\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.128521 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.130073 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.132197 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.136581 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.139294 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-config-volume\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.139947 4857 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-web-config\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.145380 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvknl\" (UniqueName: \"kubernetes.io/projected/b1788d9e-4723-4fd4-9a0c-b2303b79cd3d-kube-api-access-bvknl\") pod \"alertmanager-main-0\" (UID: \"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.171936 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.210613 4857 generic.go:334] "Generic (PLEG): container finished" podID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerID="4f815b6c7ef00ae11dc45cbd88ebbf109b0e47be90a954d90d956658e936e4e2" exitCode=0 Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.210740 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerDied","Data":"4f815b6c7ef00ae11dc45cbd88ebbf109b0e47be90a954d90d956658e936e4e2"} Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.220049 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" event={"ID":"06fa41c7-1ab7-473d-8cc9-f01a74f10af4","Type":"ContainerStarted","Data":"71f3812894e107b34e088f89835c905871e1dac3debe43550133e37d434c0d1b"} Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.472435 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n"] Mar 18 14:08:33 crc kubenswrapper[4857]: I0318 14:08:33.802797 4857 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.235718 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerStarted","Data":"651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f"} Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.239561 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerStarted","Data":"7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275"} Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.242040 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" event={"ID":"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19","Type":"ContainerStarted","Data":"b98cbcbe36fa592dac3ae720d7b51c0080ede9aceda73e0c6c570d1a37bd49f1"} Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.242071 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" event={"ID":"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19","Type":"ContainerStarted","Data":"633d9eb290a7fe9346e45235f389aa9048404c23a2075827cf3e77e8d53a9c57"} Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.247355 4857 generic.go:334] "Generic (PLEG): container finished" podID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerID="87f46a070da43c294175b32a2b4fa5450b8b583bb6a8f9a4cece4e7c400adf7a" exitCode=0 Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.247455 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerDied","Data":"87f46a070da43c294175b32a2b4fa5450b8b583bb6a8f9a4cece4e7c400adf7a"} Mar 18 14:08:34 crc 
kubenswrapper[4857]: I0318 14:08:34.267570 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9sl8" podStartSLOduration=4.47401847 podStartE2EDuration="7.267501433s" podCreationTimestamp="2026-03-18 14:08:27 +0000 UTC" firstStartedPulling="2026-03-18 14:08:30.112593759 +0000 UTC m=+494.241722256" lastFinishedPulling="2026-03-18 14:08:32.906076762 +0000 UTC m=+497.035205219" observedRunningTime="2026-03-18 14:08:34.264848775 +0000 UTC m=+498.393977232" watchObservedRunningTime="2026-03-18 14:08:34.267501433 +0000 UTC m=+498.396629890" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.294435 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zl78l" podStartSLOduration=3.481700909 podStartE2EDuration="6.294402477s" podCreationTimestamp="2026-03-18 14:08:28 +0000 UTC" firstStartedPulling="2026-03-18 14:08:30.120266767 +0000 UTC m=+494.249395234" lastFinishedPulling="2026-03-18 14:08:32.932968345 +0000 UTC m=+497.062096802" observedRunningTime="2026-03-18 14:08:34.290155817 +0000 UTC m=+498.419284274" watchObservedRunningTime="2026-03-18 14:08:34.294402477 +0000 UTC m=+498.423530934" Mar 18 14:08:34 crc kubenswrapper[4857]: W0318 14:08:34.488571 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1788d9e_4723_4fd4_9a0c_b2303b79cd3d.slice/crio-a375e074a9febf6213ac485843991c1f7af43ffa5c510634ebd222e69a549eb7 WatchSource:0}: Error finding container a375e074a9febf6213ac485843991c1f7af43ffa5c510634ebd222e69a549eb7: Status 404 returned error can't find the container with id a375e074a9febf6213ac485843991c1f7af43ffa5c510634ebd222e69a549eb7 Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.639353 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-556796c855-jl79p"] Mar 18 14:08:34 crc kubenswrapper[4857]: 
I0318 14:08:34.642662 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.660450 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.660645 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.660696 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.660867 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-1tsavv12sllgp" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.660982 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.661370 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.661945 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-qh26p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.666806 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-556796c855-jl79p"] Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.977972 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-grpc-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: 
\"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978543 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69z26\" (UniqueName: \"kubernetes.io/projected/03f7b890-bf37-439b-b604-a3190e5e8b27-kube-api-access-69z26\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978610 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978709 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978780 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03f7b890-bf37-439b-b604-a3190e5e8b27-metrics-client-ca\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978807 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978891 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:34 crc kubenswrapper[4857]: I0318 14:08:34.978953 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082466 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082542 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/03f7b890-bf37-439b-b604-a3190e5e8b27-metrics-client-ca\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082565 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082606 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082627 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082687 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-grpc-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 
crc kubenswrapper[4857]: I0318 14:08:35.082717 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69z26\" (UniqueName: \"kubernetes.io/projected/03f7b890-bf37-439b-b604-a3190e5e8b27-kube-api-access-69z26\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.082738 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.091168 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/03f7b890-bf37-439b-b604-a3190e5e8b27-metrics-client-ca\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.091953 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.092427 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-rules\") pod 
\"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.093197 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.100734 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.101972 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-grpc-tls\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.102989 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/03f7b890-bf37-439b-b604-a3190e5e8b27-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.122466 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-69z26\" (UniqueName: \"kubernetes.io/projected/03f7b890-bf37-439b-b604-a3190e5e8b27-kube-api-access-69z26\") pod \"thanos-querier-556796c855-jl79p\" (UID: \"03f7b890-bf37-439b-b604-a3190e5e8b27\") " pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.272642 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"a375e074a9febf6213ac485843991c1f7af43ffa5c510634ebd222e69a549eb7"} Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.277828 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-wqb6t" event={"ID":"0b2122df-d225-43c8-87ec-38dc6c7ad5e5","Type":"ContainerStarted","Data":"33df5f2250e1e81e2c17031e465a589710d58af074efe772c682aa0c7d27e76d"} Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.285179 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerStarted","Data":"750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d"} Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.294290 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" event={"ID":"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19","Type":"ContainerStarted","Data":"4716510efba9e50d623d35a6a871c03c6a937c886cd7f888f21b9da7880826f1"} Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.314474 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:35 crc kubenswrapper[4857]: I0318 14:08:35.338386 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z72sl" podStartSLOduration=3.887941097 podStartE2EDuration="8.338335406s" podCreationTimestamp="2026-03-18 14:08:27 +0000 UTC" firstStartedPulling="2026-03-18 14:08:30.088670302 +0000 UTC m=+494.217798769" lastFinishedPulling="2026-03-18 14:08:34.539064621 +0000 UTC m=+498.668193078" observedRunningTime="2026-03-18 14:08:35.329315064 +0000 UTC m=+499.458443531" watchObservedRunningTime="2026-03-18 14:08:35.338335406 +0000 UTC m=+499.467463853" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.437159 4857 generic.go:334] "Generic (PLEG): container finished" podID="0b2122df-d225-43c8-87ec-38dc6c7ad5e5" containerID="33df5f2250e1e81e2c17031e465a589710d58af074efe772c682aa0c7d27e76d" exitCode=0 Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.439288 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-wqb6t" event={"ID":"0b2122df-d225-43c8-87ec-38dc6c7ad5e5","Type":"ContainerDied","Data":"33df5f2250e1e81e2c17031e465a589710d58af074efe772c682aa0c7d27e76d"} Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.603370 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"] Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.604933 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.630292 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"] Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.725110 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.727768 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdn25\" (UniqueName: \"kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.727873 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.727907 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.727977 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.728079 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.728146 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.829989 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830083 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830136 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdn25\" (UniqueName: \"kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830173 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830191 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830225 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.830245 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.831505 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.832456 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.833123 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:36 crc kubenswrapper[4857]: I0318 14:08:36.833352 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:36.839884 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.053384 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.105343 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdn25\" (UniqueName: \"kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25\") pod \"console-858d4f646b-vqg6z\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.239986 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6f67489d6c-zwgbg"] Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.242228 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6f67489d6c-zwgbg"] Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.242617 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.261780 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.262215 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.262449 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-258doane252h8" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.262890 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9shsn" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.263106 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.263229 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.294293 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.296473 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"] Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.297614 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.300444 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.302504 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.304176 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"] Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.368943 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxchh\" (UniqueName: \"kubernetes.io/projected/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-kube-api-access-sxchh\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.369059 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-metrics-server-audit-profiles\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.369127 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-client-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 
14:08:37.369162 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.369187 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-client-certs\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.369214 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-server-tls\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.369242 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-audit-log\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.395978 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-556796c855-jl79p"] Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.450643 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/node-exporter-wqb6t" event={"ID":"0b2122df-d225-43c8-87ec-38dc6c7ad5e5","Type":"ContainerStarted","Data":"2aff0303ee34c9b5abcfca9c681e5cb5cb01999fe0d044916831b89ed0cd7b6c"} Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.462735 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerStarted","Data":"408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb"} Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471387 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-audit-log\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471505 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxchh\" (UniqueName: \"kubernetes.io/projected/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-kube-api-access-sxchh\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471588 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-metrics-server-audit-profiles\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471652 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-client-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471689 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.471723 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-client-certs\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.472203 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-server-tls\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.472257 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9ae4cfa8-f423-4706-89fa-5d87eec3340c-monitoring-plugin-cert\") pod \"monitoring-plugin-7fb469cf8-28cd5\" (UID: \"9ae4cfa8-f423-4706-89fa-5d87eec3340c\") " pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 14:08:37 crc 
kubenswrapper[4857]: I0318 14:08:37.472703 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-audit-log\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.473623 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-metrics-server-audit-profiles\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.473781 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.481102 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" event={"ID":"06fa41c7-1ab7-473d-8cc9-f01a74f10af4","Type":"ContainerStarted","Data":"cbf93d1805a0fa101a0c3cefe3b08b9e414fef44b421449aee05f5faba9784df"} Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.482981 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-client-ca-bundle\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:37 crc 
kubenswrapper[4857]: I0318 14:08:37.483357 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-server-tls\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.496673 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7qbr" podStartSLOduration=4.38714307 podStartE2EDuration="9.496645613s" podCreationTimestamp="2026-03-18 14:08:28 +0000 UTC" firstStartedPulling="2026-03-18 14:08:30.098163267 +0000 UTC m=+494.227291764" lastFinishedPulling="2026-03-18 14:08:35.20766585 +0000 UTC m=+499.336794307" observedRunningTime="2026-03-18 14:08:37.495359649 +0000 UTC m=+501.624488106" watchObservedRunningTime="2026-03-18 14:08:37.496645613 +0000 UTC m=+501.625774070"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.497962 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-secret-metrics-client-certs\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.500103 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxchh\" (UniqueName: \"kubernetes.io/projected/bc2369f0-d23b-4453-a74c-f8581c9f5cc0-kube-api-access-sxchh\") pod \"metrics-server-6f67489d6c-zwgbg\" (UID: \"bc2369f0-d23b-4453-a74c-f8581c9f5cc0\") " pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.573725 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9ae4cfa8-f423-4706-89fa-5d87eec3340c-monitoring-plugin-cert\") pod \"monitoring-plugin-7fb469cf8-28cd5\" (UID: \"9ae4cfa8-f423-4706-89fa-5d87eec3340c\") " pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.580584 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9ae4cfa8-f423-4706-89fa-5d87eec3340c-monitoring-plugin-cert\") pod \"monitoring-plugin-7fb469cf8-28cd5\" (UID: \"9ae4cfa8-f423-4706-89fa-5d87eec3340c\") " pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.617967 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg"
Mar 18 14:08:37 crc kubenswrapper[4857]: I0318 14:08:37.645571 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.532470 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9sl8"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.541119 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z72sl"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.543879 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z72sl"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.547191 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9sl8"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.548188 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7qbr"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.548269 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7qbr"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.573383 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"629f8f92e7c3436cec0f2c16a4fb1868de4ca59bc43c82b46ee6b059fd54e56d"}
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.629460 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9sl8"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.654305 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.657411 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.666779 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.667284 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.667500 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-dcdjp"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.668717 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.669004 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.669424 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.669686 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.669897 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.669922 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.670229 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.670298 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-760jgnrl236t0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.670476 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.675885 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.692401 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.764071 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl78l"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.765715 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl78l"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.831307 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zl78l"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847472 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847552 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847591 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847636 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847658 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847687 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847712 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847737 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847794 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847857 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847887 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-web-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.847969 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f62rp\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-kube-api-access-f62rp\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848012 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config-out\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848056 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848113 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848132 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848148 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.848163 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949670 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949766 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949830 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949860 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949909 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949936 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.949964 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950006 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950033 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950068 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950103 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950137 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950171 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950198 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950227 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-web-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950252 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f62rp\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-kube-api-access-f62rp\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950279 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config-out\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.950307 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.951497 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.972496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.973329 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.974018 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config-out\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.974194 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.974259 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.974583 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.974721 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.975861 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-web-config\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.976834 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.978714 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e6af46e-8f86-4122-bdaf-8ccec1a76775-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.979042 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.979997 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.980554 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.981109 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.983000 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.983658 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/0e6af46e-8f86-4122-bdaf-8ccec1a76775-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:38 crc kubenswrapper[4857]: I0318 14:08:38.996639 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f62rp\" (UniqueName: \"kubernetes.io/projected/0e6af46e-8f86-4122-bdaf-8ccec1a76775-kube-api-access-f62rp\") pod \"prometheus-k8s-0\" (UID: \"0e6af46e-8f86-4122-bdaf-8ccec1a76775\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.006469 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.621429 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=<
Mar 18 14:08:39 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 14:08:39 crc kubenswrapper[4857]: >
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.638282 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z72sl" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" probeResult="failure" output=<
Mar 18 14:08:39 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 14:08:39 crc kubenswrapper[4857]: >
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.666980 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl78l"
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.674214 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9sl8"
Mar 18 14:08:39 crc kubenswrapper[4857]: I0318 14:08:39.768921 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"]
Mar 18 14:08:39 crc kubenswrapper[4857]: W0318 14:08:39.780522 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fba28b5_6fea_492d_9f32_6115f70b078c.slice/crio-e21026ade098c619803ff8f603f83f2d346c3f702c6a2f28e9207294547e7032 WatchSource:0}: Error finding container e21026ade098c619803ff8f603f83f2d346c3f702c6a2f28e9207294547e7032: Status 404 returned error can't find the container with id e21026ade098c619803ff8f603f83f2d346c3f702c6a2f28e9207294547e7032
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.080590 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6f67489d6c-zwgbg"]
Mar 18 14:08:40 crc kubenswrapper[4857]: W0318 14:08:40.090159 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2369f0_d23b_4453_a74c_f8581c9f5cc0.slice/crio-dbb20529d08ebb6b670383bb0a5c8a357b91b037042a212f7c5f96ffa92496ea WatchSource:0}: Error finding container dbb20529d08ebb6b670383bb0a5c8a357b91b037042a212f7c5f96ffa92496ea: Status 404 returned error can't find the container with id dbb20529d08ebb6b670383bb0a5c8a357b91b037042a212f7c5f96ffa92496ea
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.182248 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5"]
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.188076 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 14:08:40 crc kubenswrapper[4857]: W0318 14:08:40.193221 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e6af46e_8f86_4122_bdaf_8ccec1a76775.slice/crio-8935df3d3a3109e5c3fce8db1e534b60b38e1614fcb679a16b99613d8190a7ba WatchSource:0}: Error finding container 8935df3d3a3109e5c3fce8db1e534b60b38e1614fcb679a16b99613d8190a7ba: Status 404 returned error can't find the container with id 8935df3d3a3109e5c3fce8db1e534b60b38e1614fcb679a16b99613d8190a7ba
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.596496 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"5c93a817345d7d578cafc2233028f81d0a25dbaf47fbc6b456a4bbcc9484e0ee"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.600232 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" event={"ID":"bc2369f0-d23b-4453-a74c-f8581c9f5cc0","Type":"ContainerStarted","Data":"dbb20529d08ebb6b670383bb0a5c8a357b91b037042a212f7c5f96ffa92496ea"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.610969 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-wqb6t" event={"ID":"0b2122df-d225-43c8-87ec-38dc6c7ad5e5","Type":"ContainerStarted","Data":"52936725d37912ec79f603e02f6263a78646c953345a11085bac6dbbda400ed7"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.612982 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"8935df3d3a3109e5c3fce8db1e534b60b38e1614fcb679a16b99613d8190a7ba"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.617837 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-vqg6z" event={"ID":"9fba28b5-6fea-492d-9f32-6115f70b078c","Type":"ContainerStarted","Data":"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.617910 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-vqg6z" event={"ID":"9fba28b5-6fea-492d-9f32-6115f70b078c","Type":"ContainerStarted","Data":"e21026ade098c619803ff8f603f83f2d346c3f702c6a2f28e9207294547e7032"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.620365 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" event={"ID":"06fa41c7-1ab7-473d-8cc9-f01a74f10af4","Type":"ContainerStarted","Data":"af8d8cf998c7252717762fc05e12d3c254949ec18e1dfe4b4f62446ef36c2ecf"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.621316 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" event={"ID":"9ae4cfa8-f423-4706-89fa-5d87eec3340c","Type":"ContainerStarted","Data":"a6b3859738fb01a771b5eddbdf3933f9d3d5b77484787682fcb2ed59b26920d0"}
Mar 18 14:08:40 crc kubenswrapper[4857]: I0318 14:08:40.624428 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" event={"ID":"e241769f-a7ba-4ab6-9aa9-5a1a65eb4c19","Type":"ContainerStarted","Data":"ab05a67f0b94aafcd34dbe09a79d16acf3967894931ffb9203222cc694995de1"}
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.635787 4857 generic.go:334] "Generic (PLEG): container finished" podID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerID="9951ec9c23cb4d6c6082587eafc2534e1bf6b12a006abe8c9ccf23a3ab7ee7f5" exitCode=0
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.635893 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerDied","Data":"9951ec9c23cb4d6c6082587eafc2534e1bf6b12a006abe8c9ccf23a3ab7ee7f5"}
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.647791 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" event={"ID":"06fa41c7-1ab7-473d-8cc9-f01a74f10af4","Type":"ContainerStarted","Data":"eafb8cd11b6e8316776dc498884965f33694ed9ce0813d587eb082647ffde5cf"}
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.650498 4857 generic.go:334] "Generic (PLEG): container finished" podID="b1788d9e-4723-4fd4-9a0c-b2303b79cd3d" containerID="5c93a817345d7d578cafc2233028f81d0a25dbaf47fbc6b456a4bbcc9484e0ee" exitCode=0
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.650685 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerDied","Data":"5c93a817345d7d578cafc2233028f81d0a25dbaf47fbc6b456a4bbcc9484e0ee"}
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.733859 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-858d4f646b-vqg6z" podStartSLOduration=5.733807237 podStartE2EDuration="5.733807237s" podCreationTimestamp="2026-03-18 14:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:08:41.703617189 +0000 UTC m=+505.832745666" watchObservedRunningTime="2026-03-18 14:08:41.733807237 +0000 UTC m=+505.862935694"
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.746309 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-m64kn" podStartSLOduration=7.218644138 podStartE2EDuration="10.746274408s" podCreationTimestamp="2026-03-18 14:08:31 +0000 UTC" firstStartedPulling="2026-03-18 14:08:32.955054434 +0000 UTC m=+497.084182891" lastFinishedPulling="2026-03-18 14:08:36.482684704 +0000 UTC m=+500.611813161" observedRunningTime="2026-03-18 14:08:41.73472146 +0000 UTC m=+505.863849927" watchObservedRunningTime="2026-03-18 14:08:41.746274408 +0000 UTC m=+505.875402865"
Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.784262 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-wqb6t" podStartSLOduration=8.269737942 podStartE2EDuration="10.784230396s" podCreationTimestamp="2026-03-18 14:08:31 +0000 UTC" firstStartedPulling="2026-03-18 14:08:32.08135637 +0000 UTC m=+496.210484827" lastFinishedPulling="2026-03-18 14:08:34.595848834 +0000 UTC m=+498.724977281" observedRunningTime="2026-03-18 14:08:41.766442108 +0000 UTC m=+505.895570565" watchObservedRunningTime="2026-03-18 14:08:41.784230396 +0000 UTC m=+505.913358853"
Mar 18
14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.851189 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2mn2n" podStartSLOduration=5.955488529 podStartE2EDuration="10.85113018s" podCreationTimestamp="2026-03-18 14:08:31 +0000 UTC" firstStartedPulling="2026-03-18 14:08:34.693823649 +0000 UTC m=+498.822952116" lastFinishedPulling="2026-03-18 14:08:39.58946531 +0000 UTC m=+503.718593767" observedRunningTime="2026-03-18 14:08:41.808671336 +0000 UTC m=+505.937799793" watchObservedRunningTime="2026-03-18 14:08:41.85113018 +0000 UTC m=+505.980258657" Mar 18 14:08:41 crc kubenswrapper[4857]: I0318 14:08:41.921472 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vs9hw" Mar 18 14:08:42 crc kubenswrapper[4857]: I0318 14:08:42.004473 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"] Mar 18 14:08:47 crc kubenswrapper[4857]: I0318 14:08:47.296630 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:47 crc kubenswrapper[4857]: I0318 14:08:47.297592 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:47 crc kubenswrapper[4857]: I0318 14:08:47.302815 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:47 crc kubenswrapper[4857]: I0318 14:08:47.714938 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:08:47 crc kubenswrapper[4857]: I0318 14:08:47.790435 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"] Mar 18 14:08:48 crc kubenswrapper[4857]: I0318 14:08:48.564327 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:48 crc kubenswrapper[4857]: I0318 14:08:48.571187 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:48 crc kubenswrapper[4857]: I0318 14:08:48.619604 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 14:08:48 crc kubenswrapper[4857]: I0318 14:08:48.634994 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.731834 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" event={"ID":"bc2369f0-d23b-4453-a74c-f8581c9f5cc0","Type":"ContainerStarted","Data":"783ccda3034bcd4060228c662b4bc26ab6b3a9b1ea6187056fac74f230912fb1"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.739958 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"fb71addf2e2d1f15bb772bce06f9a240baf707f9fc6092e783ceeb08718b4933"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.740021 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"977bf86b8fcf378a23d2053d74791493c9ae15cfbe8a4ffdaabb4ada6123bf6a"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.754874 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"c64ec88ee6a161862f35f0631d3d68439eaaf75bea67f75c0eff575dc4ed9f34"} Mar 18 
14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.754955 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"f4cbc1eefb389f9cfd5a3dcea7756e50ebca007ab52ef0e9cae078b6ce1eec49"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.763607 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podStartSLOduration=3.9083357850000002 podStartE2EDuration="12.763586979s" podCreationTimestamp="2026-03-18 14:08:37 +0000 UTC" firstStartedPulling="2026-03-18 14:08:40.095223463 +0000 UTC m=+504.224352120" lastFinishedPulling="2026-03-18 14:08:48.950474857 +0000 UTC m=+513.079603314" observedRunningTime="2026-03-18 14:08:49.76284992 +0000 UTC m=+513.891978377" watchObservedRunningTime="2026-03-18 14:08:49.763586979 +0000 UTC m=+513.892715426" Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.763803 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" event={"ID":"9ae4cfa8-f423-4706-89fa-5d87eec3340c","Type":"ContainerStarted","Data":"5d5bc412c31595743196c48d18f52c094ab7e28eeb5dda0adef29d69b70fc387"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.765483 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.784704 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"6558f8f25dc63d8b2b36288705dc9c711d5f1380a9e7765ab8f3c48e20fa7171"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.784765 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"ed159a6ce8d3a00488715d5424f4ca8edf9b083ad088de4013fbf84ba8f8f4ba"} Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.785802 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.797779 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" podStartSLOduration=4.752800555 podStartE2EDuration="12.797737569s" podCreationTimestamp="2026-03-18 14:08:37 +0000 UTC" firstStartedPulling="2026-03-18 14:08:40.187179232 +0000 UTC m=+504.316307689" lastFinishedPulling="2026-03-18 14:08:48.232116246 +0000 UTC m=+512.361244703" observedRunningTime="2026-03-18 14:08:49.794498445 +0000 UTC m=+513.923626912" watchObservedRunningTime="2026-03-18 14:08:49.797737569 +0000 UTC m=+513.926866026" Mar 18 14:08:49 crc kubenswrapper[4857]: I0318 14:08:49.938139 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.797267 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"9da256d832a64394fd9f80c45cd7d852208005064c1c6b3fe5b371483e3aae83"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.797318 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"9be33f1944c12c31873e3d4426c07657f2923067758dda8c43f225bd62d7677e"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.797334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"8147f37dd849c303780aeabd763b833d48528fe19f9ff6ecd11a5e90ea8e5819"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.801329 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"47ab56e4f8f6816b17b4e29770405320f63a26165d285db79c40a22bd46a9269"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.814896 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"817522e684e2e5fdf971ee4be4a3db07705306a51cceeb9f3ee924d986b812ad"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.814955 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"ce9051b7f3c70cd365176a15ec8a7f2451881357c638860deeba0e3b6d183682"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.814970 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"a6c7281516264904425f48ee1df64c33cf3be2f3287381e672075b0b2a0faee8"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.814983 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"0e6af46e-8f86-4122-bdaf-8ccec1a76775","Type":"ContainerStarted","Data":"3f69ce186fb58fab72a55c19a259139d3653c217deaa85a5780d2f68bd9258dd"} Mar 18 14:08:50 crc kubenswrapper[4857]: I0318 14:08:50.852517 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.532787443 podStartE2EDuration="12.852483778s" 
podCreationTimestamp="2026-03-18 14:08:38 +0000 UTC" firstStartedPulling="2026-03-18 14:08:41.638549112 +0000 UTC m=+505.767677559" lastFinishedPulling="2026-03-18 14:08:48.958245437 +0000 UTC m=+513.087373894" observedRunningTime="2026-03-18 14:08:50.851376629 +0000 UTC m=+514.980505106" watchObservedRunningTime="2026-03-18 14:08:50.852483778 +0000 UTC m=+514.981612235" Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.828119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b1788d9e-4723-4fd4-9a0c-b2303b79cd3d","Type":"ContainerStarted","Data":"d029b7ff4ebca25660a4a33d805dbb2ed79787d8f7afa579051287cc229f7f8c"} Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.836015 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"05011a23cb648b29f75d89091eed7e5110bd615ffcf14484038116843a02369a"} Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.836072 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"76a1f09a0c46bb416d9cedb1cd5790e4395444086003a5054dddcbda9903cf69"} Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.836088 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" event={"ID":"03f7b890-bf37-439b-b604-a3190e5e8b27","Type":"ContainerStarted","Data":"78cd797fc80522e6c6bf02d02a28bc7ceb77f842cf6c23b7b87a581fbde83c40"} Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.870666 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.0629041 podStartE2EDuration="19.870641414s" podCreationTimestamp="2026-03-18 14:08:32 +0000 UTC" 
firstStartedPulling="2026-03-18 14:08:34.50137591 +0000 UTC m=+498.630504367" lastFinishedPulling="2026-03-18 14:08:51.309113224 +0000 UTC m=+515.438241681" observedRunningTime="2026-03-18 14:08:51.862003832 +0000 UTC m=+515.991132289" watchObservedRunningTime="2026-03-18 14:08:51.870641414 +0000 UTC m=+515.999769871" Mar 18 14:08:51 crc kubenswrapper[4857]: I0318 14:08:51.911024 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podStartSLOduration=4.95140896 podStartE2EDuration="17.910986014s" podCreationTimestamp="2026-03-18 14:08:34 +0000 UTC" firstStartedPulling="2026-03-18 14:08:38.220250299 +0000 UTC m=+502.349378756" lastFinishedPulling="2026-03-18 14:08:51.179827343 +0000 UTC m=+515.308955810" observedRunningTime="2026-03-18 14:08:51.906397476 +0000 UTC m=+516.035525933" watchObservedRunningTime="2026-03-18 14:08:51.910986014 +0000 UTC m=+516.040114491" Mar 18 14:08:52 crc kubenswrapper[4857]: I0318 14:08:52.845243 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:54 crc kubenswrapper[4857]: I0318 14:08:54.007490 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 14:08:55 crc kubenswrapper[4857]: I0318 14:08:55.325487 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.039701 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.039888 4857 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.039981 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.041198 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.041327 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f" gracePeriod=600 Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.619548 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.620601 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.892870 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f" exitCode=0 Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 
14:08:57.893136 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f"} Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.894236 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1"} Mar 18 14:08:57 crc kubenswrapper[4857]: I0318 14:08:57.894305 4857 scope.go:117] "RemoveContainer" containerID="482ddcef556e6723cd02e8e32ceb3c651d4bb5dce5f58a3fec35353e1d218839" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.085484 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" podUID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" containerName="registry" containerID="cri-o://d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6" gracePeriod=30 Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.529243 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828386 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828468 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828504 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828567 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtrzz\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828839 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828864 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828927 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.828953 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token\") pod \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\" (UID: \"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3\") " Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.830192 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.831046 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.837491 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.838461 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.839125 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz" (OuterVolumeSpecName: "kube-api-access-dtrzz") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "kube-api-access-dtrzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.839590 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.852917 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.853643 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" (UID: "5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930256 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtrzz\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-kube-api-access-dtrzz\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930301 4857 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930316 4857 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930327 4857 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930338 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930348 4857 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:07 crc kubenswrapper[4857]: I0318 14:09:07.930358 4857 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.187963 4857 generic.go:334] "Generic (PLEG): container finished" podID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" containerID="d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6" exitCode=0 Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.188044 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" event={"ID":"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3","Type":"ContainerDied","Data":"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6"} Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.188111 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.188328 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fh2dj" event={"ID":"5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3","Type":"ContainerDied","Data":"61b53ea0ede3cc5b82c0fb0fb28784664d96b9e834ef34ffd6f260589f85b7cc"} Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.188362 4857 scope.go:117] "RemoveContainer" containerID="d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6" Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.218569 4857 scope.go:117] "RemoveContainer" containerID="d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6" Mar 18 14:09:08 crc kubenswrapper[4857]: E0318 14:09:08.219674 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6\": container with ID starting with d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6 not found: ID does not exist" containerID="d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6" Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.219770 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6"} err="failed to get container status \"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6\": rpc error: code = NotFound desc = could not find container \"d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6\": container with ID starting with d3f84dcbfa506849fc81620be492d1121d802292ee95a0ec510d03ce7905fca6 not found: ID does not exist" Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.253002 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"] Mar 18 14:09:08 crc kubenswrapper[4857]: I0318 14:09:08.263007 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fh2dj"] Mar 18 14:09:09 crc kubenswrapper[4857]: I0318 14:09:09.199331 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" path="/var/lib/kubelet/pods/5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3/volumes" Mar 18 14:09:12 crc kubenswrapper[4857]: I0318 14:09:12.838013 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4bqqp" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" containerID="cri-o://a3ff79df1f1d26be30d755dc04aa22f5812de66f157cd80c3edbfdf837a3a019" gracePeriod=15 Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.246303 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bqqp_35ee9206-490f-4303-9ee7-198148cb3227/console/0.log" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.246585 4857 generic.go:334] "Generic (PLEG): container finished" podID="35ee9206-490f-4303-9ee7-198148cb3227" containerID="a3ff79df1f1d26be30d755dc04aa22f5812de66f157cd80c3edbfdf837a3a019" exitCode=2 Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.246625 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bqqp" event={"ID":"35ee9206-490f-4303-9ee7-198148cb3227","Type":"ContainerDied","Data":"a3ff79df1f1d26be30d755dc04aa22f5812de66f157cd80c3edbfdf837a3a019"} Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.292538 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bqqp_35ee9206-490f-4303-9ee7-198148cb3227/console/0.log" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.292643 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.463814 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.463864 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.463941 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.464077 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.464140 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.464168 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pmp6j\" (UniqueName: \"kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.464205 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca\") pod \"35ee9206-490f-4303-9ee7-198148cb3227\" (UID: \"35ee9206-490f-4303-9ee7-198148cb3227\") " Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.465470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca" (OuterVolumeSpecName: "service-ca") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.465501 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.465571 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config" (OuterVolumeSpecName: "console-config") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.465563 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.477626 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.482220 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j" (OuterVolumeSpecName: "kube-api-access-pmp6j") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "kube-api-access-pmp6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.483225 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "35ee9206-490f-4303-9ee7-198148cb3227" (UID: "35ee9206-490f-4303-9ee7-198148cb3227"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.565974 4857 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566011 4857 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566022 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmp6j\" (UniqueName: \"kubernetes.io/projected/35ee9206-490f-4303-9ee7-198148cb3227-kube-api-access-pmp6j\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566034 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566043 4857 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35ee9206-490f-4303-9ee7-198148cb3227-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566051 4857 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-console-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:13 crc kubenswrapper[4857]: I0318 14:09:13.566060 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35ee9206-490f-4303-9ee7-198148cb3227-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:09:14 crc 
kubenswrapper[4857]: I0318 14:09:14.470417 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4bqqp_35ee9206-490f-4303-9ee7-198148cb3227/console/0.log" Mar 18 14:09:14 crc kubenswrapper[4857]: I0318 14:09:14.470805 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4bqqp" event={"ID":"35ee9206-490f-4303-9ee7-198148cb3227","Type":"ContainerDied","Data":"1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9"} Mar 18 14:09:14 crc kubenswrapper[4857]: I0318 14:09:14.470859 4857 scope.go:117] "RemoveContainer" containerID="a3ff79df1f1d26be30d755dc04aa22f5812de66f157cd80c3edbfdf837a3a019" Mar 18 14:09:14 crc kubenswrapper[4857]: I0318 14:09:14.471080 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4bqqp" Mar 18 14:09:14 crc kubenswrapper[4857]: I0318 14:09:14.524162 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"] Mar 18 14:09:14 crc kubenswrapper[4857]: I0318 14:09:14.530407 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4bqqp"] Mar 18 14:09:14 crc kubenswrapper[4857]: E0318 14:09:14.624829 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ee9206_490f_4303_9ee7_198148cb3227.slice/crio-1fa3c772a526129946b6d5f4147a37c82ac90bd91a19d5802c01245295914fa9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35ee9206_490f_4303_9ee7_198148cb3227.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:09:15 crc kubenswrapper[4857]: I0318 14:09:15.171884 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35ee9206-490f-4303-9ee7-198148cb3227" 
path="/var/lib/kubelet/pods/35ee9206-490f-4303-9ee7-198148cb3227/volumes" Mar 18 14:09:17 crc kubenswrapper[4857]: I0318 14:09:17.626512 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:09:17 crc kubenswrapper[4857]: I0318 14:09:17.634273 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 14:09:39 crc kubenswrapper[4857]: I0318 14:09:39.007997 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 14:09:39 crc kubenswrapper[4857]: I0318 14:09:39.046105 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 14:09:40 crc kubenswrapper[4857]: I0318 14:09:40.005807 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.378806 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"] Mar 18 14:09:56 crc kubenswrapper[4857]: E0318 14:09:56.379820 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" containerName="registry" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.379860 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" containerName="registry" Mar 18 14:09:56 crc kubenswrapper[4857]: E0318 14:09:56.379882 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.379891 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 
14:09:56.380106 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d62c7ff-dd5b-4270-8ece-4a5198c9f6c3" containerName="registry" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.380131 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ee9206-490f-4303-9ee7-198148cb3227" containerName="console" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.380798 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.397410 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"] Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483136 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483208 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483286 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483329 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483360 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjbm\" (UniqueName: \"kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483389 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.483452 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.585615 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc 
kubenswrapper[4857]: I0318 14:09:56.586019 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.586166 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.586282 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.586441 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.586579 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.586700 
4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjbm\" (UniqueName: \"kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.587250 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.587460 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.587544 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.588509 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.593414 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.593479 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.605444 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjbm\" (UniqueName: \"kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm\") pod \"console-fc6d67c6b-2tvtn\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") " pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:56 crc kubenswrapper[4857]: I0318 14:09:56.698902 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:09:57 crc kubenswrapper[4857]: I0318 14:09:57.213254 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"] Mar 18 14:09:58 crc kubenswrapper[4857]: I0318 14:09:58.103692 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fc6d67c6b-2tvtn" event={"ID":"ac56357b-0b65-400a-88ee-cde8cbb3194d","Type":"ContainerStarted","Data":"a6b8338eef01ad8a9ec5b9697c425b8c57c7fae9edd46b9791ac996c091bbe7b"} Mar 18 14:09:58 crc kubenswrapper[4857]: I0318 14:09:58.104228 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fc6d67c6b-2tvtn" event={"ID":"ac56357b-0b65-400a-88ee-cde8cbb3194d","Type":"ContainerStarted","Data":"3785d4d45010fc221555799cddff6becd47b48467efd1c170939ed84a452b3c8"} Mar 18 14:09:58 crc kubenswrapper[4857]: I0318 14:09:58.128858 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fc6d67c6b-2tvtn" podStartSLOduration=2.128830215 podStartE2EDuration="2.128830215s" podCreationTimestamp="2026-03-18 14:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:09:58.123201671 +0000 UTC m=+582.252330188" watchObservedRunningTime="2026-03-18 14:09:58.128830215 +0000 UTC m=+582.257958712" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.156205 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564050-gtkz8"] Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.157834 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.161121 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.162176 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.164361 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564050-gtkz8"] Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.165389 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.249356 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsxs\" (UniqueName: \"kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs\") pod \"auto-csr-approver-29564050-gtkz8\" (UID: \"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0\") " pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.350559 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsxs\" (UniqueName: \"kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs\") pod \"auto-csr-approver-29564050-gtkz8\" (UID: \"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0\") " pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.371519 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsxs\" (UniqueName: \"kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs\") pod \"auto-csr-approver-29564050-gtkz8\" (UID: \"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0\") " 
pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:00 crc kubenswrapper[4857]: I0318 14:10:00.480426 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:01 crc kubenswrapper[4857]: I0318 14:10:01.436920 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564050-gtkz8"] Mar 18 14:10:01 crc kubenswrapper[4857]: W0318 14:10:01.447021 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94c30561_7c47_4b3f_a3e8_4ff8f0c486a0.slice/crio-afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41 WatchSource:0}: Error finding container afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41: Status 404 returned error can't find the container with id afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41 Mar 18 14:10:02 crc kubenswrapper[4857]: I0318 14:10:02.142389 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" event={"ID":"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0","Type":"ContainerStarted","Data":"afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41"} Mar 18 14:10:03 crc kubenswrapper[4857]: I0318 14:10:03.151163 4857 generic.go:334] "Generic (PLEG): container finished" podID="94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" containerID="c3000da080b60efe725aa03dd4a88c2301a95af190e4ee4f82ba75379c1f6764" exitCode=0 Mar 18 14:10:03 crc kubenswrapper[4857]: I0318 14:10:03.151244 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" event={"ID":"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0","Type":"ContainerDied","Data":"c3000da080b60efe725aa03dd4a88c2301a95af190e4ee4f82ba75379c1f6764"} Mar 18 14:10:04 crc kubenswrapper[4857]: I0318 14:10:04.394078 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:04 crc kubenswrapper[4857]: I0318 14:10:04.477536 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsxs\" (UniqueName: \"kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs\") pod \"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0\" (UID: \"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0\") " Mar 18 14:10:04 crc kubenswrapper[4857]: I0318 14:10:04.482444 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs" (OuterVolumeSpecName: "kube-api-access-wlsxs") pod "94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" (UID: "94c30561-7c47-4b3f-a3e8-4ff8f0c486a0"). InnerVolumeSpecName "kube-api-access-wlsxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:10:04 crc kubenswrapper[4857]: I0318 14:10:04.580274 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsxs\" (UniqueName: \"kubernetes.io/projected/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0-kube-api-access-wlsxs\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:05 crc kubenswrapper[4857]: I0318 14:10:05.168481 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" Mar 18 14:10:05 crc kubenswrapper[4857]: I0318 14:10:05.174086 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564050-gtkz8" event={"ID":"94c30561-7c47-4b3f-a3e8-4ff8f0c486a0","Type":"ContainerDied","Data":"afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41"} Mar 18 14:10:05 crc kubenswrapper[4857]: I0318 14:10:05.174133 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afde1e8bcc4737e6aeec11e1090f13d5109f64ae1e150fcaeb5a18b19d04cc41" Mar 18 14:10:05 crc kubenswrapper[4857]: I0318 14:10:05.464961 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564044-5r7zc"] Mar 18 14:10:05 crc kubenswrapper[4857]: I0318 14:10:05.472332 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564044-5r7zc"] Mar 18 14:10:06 crc kubenswrapper[4857]: I0318 14:10:06.699809 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:10:06 crc kubenswrapper[4857]: I0318 14:10:06.700425 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:10:06 crc kubenswrapper[4857]: I0318 14:10:06.706262 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:10:07 crc kubenswrapper[4857]: I0318 14:10:07.172432 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5933af-d25b-4d7a-8fda-e95c340a38ac" path="/var/lib/kubelet/pods/af5933af-d25b-4d7a-8fda-e95c340a38ac/volumes" Mar 18 14:10:07 crc kubenswrapper[4857]: I0318 14:10:07.185149 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fc6d67c6b-2tvtn" Mar 18 14:10:07 crc kubenswrapper[4857]: 
I0318 14:10:07.258515 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"] Mar 18 14:10:28 crc kubenswrapper[4857]: I0318 14:10:28.370281 4857 scope.go:117] "RemoveContainer" containerID="6dae5536d3c6a0bf2f54814dc3271dabd1cd8f0c51eed186d35de70f725f7bcb" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.299830 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-858d4f646b-vqg6z" podUID="9fba28b5-6fea-492d-9f32-6115f70b078c" containerName="console" containerID="cri-o://6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801" gracePeriod=15 Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.951940 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-858d4f646b-vqg6z_9fba28b5-6fea-492d-9f32-6115f70b078c/console/0.log" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.952270 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.971589 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.971777 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.971817 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.971972 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.972120 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdn25\" (UniqueName: \"kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.972193 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.972224 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle\") pod \"9fba28b5-6fea-492d-9f32-6115f70b078c\" (UID: \"9fba28b5-6fea-492d-9f32-6115f70b078c\") " Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.973301 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.975447 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.976004 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config" (OuterVolumeSpecName: "console-config") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:10:32 crc kubenswrapper[4857]: I0318 14:10:32.976957 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca" (OuterVolumeSpecName: "service-ca") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.038005 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.038247 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.038310 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25" (OuterVolumeSpecName: "kube-api-access-hdn25") pod "9fba28b5-6fea-492d-9f32-6115f70b078c" (UID: "9fba28b5-6fea-492d-9f32-6115f70b078c"). InnerVolumeSpecName "kube-api-access-hdn25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074349 4857 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074422 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074434 4857 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074446 4857 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fba28b5-6fea-492d-9f32-6115f70b078c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074456 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdn25\" (UniqueName: \"kubernetes.io/projected/9fba28b5-6fea-492d-9f32-6115f70b078c-kube-api-access-hdn25\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074467 4857 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-console-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.074480 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fba28b5-6fea-492d-9f32-6115f70b078c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:10:33 crc 
kubenswrapper[4857]: I0318 14:10:33.245311 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-858d4f646b-vqg6z_9fba28b5-6fea-492d-9f32-6115f70b078c/console/0.log" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.245374 4857 generic.go:334] "Generic (PLEG): container finished" podID="9fba28b5-6fea-492d-9f32-6115f70b078c" containerID="6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801" exitCode=2 Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.245414 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-vqg6z" event={"ID":"9fba28b5-6fea-492d-9f32-6115f70b078c","Type":"ContainerDied","Data":"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801"} Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.245452 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-858d4f646b-vqg6z" event={"ID":"9fba28b5-6fea-492d-9f32-6115f70b078c","Type":"ContainerDied","Data":"e21026ade098c619803ff8f603f83f2d346c3f702c6a2f28e9207294547e7032"} Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.245473 4857 scope.go:117] "RemoveContainer" containerID="6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.245477 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-858d4f646b-vqg6z" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.276245 4857 scope.go:117] "RemoveContainer" containerID="6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801" Mar 18 14:10:33 crc kubenswrapper[4857]: E0318 14:10:33.277079 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801\": container with ID starting with 6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801 not found: ID does not exist" containerID="6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.277127 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801"} err="failed to get container status \"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801\": rpc error: code = NotFound desc = could not find container \"6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801\": container with ID starting with 6f3fd5c241788b1aea930179989c7080b1ab219d6f5fb9061ba701bebb1f5801 not found: ID does not exist" Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.286184 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"] Mar 18 14:10:33 crc kubenswrapper[4857]: I0318 14:10:33.290820 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-858d4f646b-vqg6z"] Mar 18 14:10:35 crc kubenswrapper[4857]: I0318 14:10:35.173017 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fba28b5-6fea-492d-9f32-6115f70b078c" path="/var/lib/kubelet/pods/9fba28b5-6fea-492d-9f32-6115f70b078c/volumes" Mar 18 14:10:57 crc kubenswrapper[4857]: I0318 14:10:57.039402 4857 patch_prober.go:28] interesting 
pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:10:57 crc kubenswrapper[4857]: I0318 14:10:57.040421 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:11:27 crc kubenswrapper[4857]: I0318 14:11:27.038555 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:11:27 crc kubenswrapper[4857]: I0318 14:11:27.039124 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:11:28 crc kubenswrapper[4857]: I0318 14:11:28.437631 4857 scope.go:117] "RemoveContainer" containerID="1e19efb8d3c40e0ef0eed3cda9a3e8d62af2eb599f42fe8f33a6c18af65497af" Mar 18 14:11:28 crc kubenswrapper[4857]: I0318 14:11:28.508072 4857 scope.go:117] "RemoveContainer" containerID="4b364a8a1f55996f928000ab80476797f1d600e64460f3a63565d9f73b95965b" Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.038983 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.039608 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.039705 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.040722 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.040828 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1" gracePeriod=600 Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.810958 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1" exitCode=0 Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.811129 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" 
event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1"} Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.811963 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18"} Mar 18 14:11:57 crc kubenswrapper[4857]: I0318 14:11:57.812009 4857 scope.go:117] "RemoveContainer" containerID="9717b4ec826d9d5afdc587cf60c742eaa0e0f3db09188f675b7e96dde193977f" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.173804 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564052-hpjwq"] Mar 18 14:12:00 crc kubenswrapper[4857]: E0318 14:12:00.174636 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fba28b5-6fea-492d-9f32-6115f70b078c" containerName="console" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.174652 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fba28b5-6fea-492d-9f32-6115f70b078c" containerName="console" Mar 18 14:12:00 crc kubenswrapper[4857]: E0318 14:12:00.174689 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" containerName="oc" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.174699 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" containerName="oc" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.174877 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fba28b5-6fea-492d-9f32-6115f70b078c" containerName="console" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.174910 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" containerName="oc" Mar 18 14:12:00 crc 
kubenswrapper[4857]: I0318 14:12:00.176544 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.182413 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.182914 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.187282 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.194537 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257wk\" (UniqueName: \"kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk\") pod \"auto-csr-approver-29564052-hpjwq\" (UID: \"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231\") " pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.195341 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564052-hpjwq"] Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.296165 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257wk\" (UniqueName: \"kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk\") pod \"auto-csr-approver-29564052-hpjwq\" (UID: \"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231\") " pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.325114 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257wk\" (UniqueName: \"kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk\") pod 
\"auto-csr-approver-29564052-hpjwq\" (UID: \"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231\") " pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.511061 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.787169 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564052-hpjwq"] Mar 18 14:12:00 crc kubenswrapper[4857]: W0318 14:12:00.791517 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29bc5ba3_9c2c_486a_b75f_ff4a4b59e231.slice/crio-086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba WatchSource:0}: Error finding container 086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba: Status 404 returned error can't find the container with id 086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba Mar 18 14:12:00 crc kubenswrapper[4857]: I0318 14:12:00.845492 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" event={"ID":"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231","Type":"ContainerStarted","Data":"086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba"} Mar 18 14:12:02 crc kubenswrapper[4857]: I0318 14:12:02.865953 4857 generic.go:334] "Generic (PLEG): container finished" podID="29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" containerID="a8bb763f8f8a08d3e6bfdfce69b5ebb116fe2f6bf550318769c53bf6c87c9686" exitCode=0 Mar 18 14:12:02 crc kubenswrapper[4857]: I0318 14:12:02.866252 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" event={"ID":"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231","Type":"ContainerDied","Data":"a8bb763f8f8a08d3e6bfdfce69b5ebb116fe2f6bf550318769c53bf6c87c9686"} Mar 18 14:12:04 crc kubenswrapper[4857]: 
I0318 14:12:04.178426 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.364444 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257wk\" (UniqueName: \"kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk\") pod \"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231\" (UID: \"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231\") " Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.371235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk" (OuterVolumeSpecName: "kube-api-access-257wk") pod "29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" (UID: "29bc5ba3-9c2c-486a-b75f-ff4a4b59e231"). InnerVolumeSpecName "kube-api-access-257wk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.468204 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257wk\" (UniqueName: \"kubernetes.io/projected/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231-kube-api-access-257wk\") on node \"crc\" DevicePath \"\"" Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.886267 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" event={"ID":"29bc5ba3-9c2c-486a-b75f-ff4a4b59e231","Type":"ContainerDied","Data":"086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba"} Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.886363 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086630aa9070c5235e6a2066fb4f35128e2f45e6470328bd76e7c762f54d83ba" Mar 18 14:12:04 crc kubenswrapper[4857]: I0318 14:12:04.886457 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564052-hpjwq" Mar 18 14:12:05 crc kubenswrapper[4857]: I0318 14:12:05.267242 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564046-slwnd"] Mar 18 14:12:05 crc kubenswrapper[4857]: I0318 14:12:05.272957 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564046-slwnd"] Mar 18 14:12:07 crc kubenswrapper[4857]: I0318 14:12:07.173712 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca33260b-e859-4d77-9509-3e08e46be7f1" path="/var/lib/kubelet/pods/ca33260b-e859-4d77-9509-3e08e46be7f1/volumes" Mar 18 14:12:28 crc kubenswrapper[4857]: I0318 14:12:28.634861 4857 scope.go:117] "RemoveContainer" containerID="af3077fed793ad6237eaf1924a36c77deb405e3ee3a5507a654bbaac519b4050" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.734498 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"] Mar 18 14:13:08 crc kubenswrapper[4857]: E0318 14:13:08.735365 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" containerName="oc" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.735390 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" containerName="oc" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.735545 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" containerName="oc" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.736670 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.739279 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.744873 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"] Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.845988 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.846037 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wlxl\" (UniqueName: \"kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.846058 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: 
I0318 14:13:08.947316 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.947386 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wlxl\" (UniqueName: \"kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.947416 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.948300 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.948319 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"
Mar 18 14:13:08 crc kubenswrapper[4857]: I0318 14:13:08.969037 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wlxl\" (UniqueName: \"kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"
Mar 18 14:13:09 crc kubenswrapper[4857]: I0318 14:13:09.061062 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"
Mar 18 14:13:09 crc kubenswrapper[4857]: I0318 14:13:09.300821 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"]
Mar 18 14:13:09 crc kubenswrapper[4857]: I0318 14:13:09.671469 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerStarted","Data":"27d5c3c0cbeb13ca2930382a98fde3fc405391fc1131af0ba7bde54e8cb5120d"}
Mar 18 14:13:09 crc kubenswrapper[4857]: I0318 14:13:09.671549 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerStarted","Data":"35956bdc13444cf87e01dae1b1e10ea37d05fd1dd9be1ff23d5c20123e3bc185"}
Mar 18 14:13:10 crc kubenswrapper[4857]: I0318 14:13:10.679383 4857 generic.go:334] "Generic (PLEG): container finished" podID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerID="27d5c3c0cbeb13ca2930382a98fde3fc405391fc1131af0ba7bde54e8cb5120d" exitCode=0
Mar 18 14:13:10 crc kubenswrapper[4857]: I0318 14:13:10.679457 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerDied","Data":"27d5c3c0cbeb13ca2930382a98fde3fc405391fc1131af0ba7bde54e8cb5120d"}
Mar 18 14:13:12 crc kubenswrapper[4857]: I0318 14:13:12.694202 4857 generic.go:334] "Generic (PLEG): container finished" podID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerID="c227815e9fde5384bc1b865466b0292b9a704bd34c8b3ab757b89e4962bf832b" exitCode=0
Mar 18 14:13:12 crc kubenswrapper[4857]: I0318 14:13:12.694300 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerDied","Data":"c227815e9fde5384bc1b865466b0292b9a704bd34c8b3ab757b89e4962bf832b"}
Mar 18 14:13:13 crc kubenswrapper[4857]: I0318 14:13:13.703723 4857 generic.go:334] "Generic (PLEG): container finished" podID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerID="cdbd929c87501748546cf90ec044eee3b44385bb57e38d6c98eb6e4ae0d6b204" exitCode=0
Mar 18 14:13:13 crc kubenswrapper[4857]: I0318 14:13:13.703802 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerDied","Data":"cdbd929c87501748546cf90ec044eee3b44385bb57e38d6c98eb6e4ae0d6b204"}
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.005436 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.251050 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wlxl\" (UniqueName: \"kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl\") pod \"a640fe72-4cc0-46a9-b835-36c8d15718ce\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") "
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.251154 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util\") pod \"a640fe72-4cc0-46a9-b835-36c8d15718ce\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") "
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.251217 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle\") pod \"a640fe72-4cc0-46a9-b835-36c8d15718ce\" (UID: \"a640fe72-4cc0-46a9-b835-36c8d15718ce\") "
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.259327 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle" (OuterVolumeSpecName: "bundle") pod "a640fe72-4cc0-46a9-b835-36c8d15718ce" (UID: "a640fe72-4cc0-46a9-b835-36c8d15718ce"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.265027 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl" (OuterVolumeSpecName: "kube-api-access-9wlxl") pod "a640fe72-4cc0-46a9-b835-36c8d15718ce" (UID: "a640fe72-4cc0-46a9-b835-36c8d15718ce"). InnerVolumeSpecName "kube-api-access-9wlxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.265436 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util" (OuterVolumeSpecName: "util") pod "a640fe72-4cc0-46a9-b835-36c8d15718ce" (UID: "a640fe72-4cc0-46a9-b835-36c8d15718ce"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.352345 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wlxl\" (UniqueName: \"kubernetes.io/projected/a640fe72-4cc0-46a9-b835-36c8d15718ce-kube-api-access-9wlxl\") on node \"crc\" DevicePath \"\""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.352385 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-util\") on node \"crc\" DevicePath \"\""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.352396 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a640fe72-4cc0-46a9-b835-36c8d15718ce-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.761135 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b" event={"ID":"a640fe72-4cc0-46a9-b835-36c8d15718ce","Type":"ContainerDied","Data":"35956bdc13444cf87e01dae1b1e10ea37d05fd1dd9be1ff23d5c20123e3bc185"}
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.761352 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b"
Mar 18 14:13:15 crc kubenswrapper[4857]: I0318 14:13:15.761392 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35956bdc13444cf87e01dae1b1e10ea37d05fd1dd9be1ff23d5c20123e3bc185"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.229891 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bpx9l"]
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.230973 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-controller" containerID="cri-o://42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.231629 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="sbdb" containerID="cri-o://27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.231719 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="nbdb" containerID="cri-o://a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.231861 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="northd" containerID="cri-o://43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.231930 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.231987 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-node" containerID="cri-o://584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.232046 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-acl-logging" containerID="cri-o://b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.288100 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller" containerID="cri-o://c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" gracePeriod=30
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.783327 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovnkube-controller/3.log"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.787288 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-acl-logging/0.log"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.788357 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-controller/0.log"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.789324 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" exitCode=0
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.790950 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" exitCode=0
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791098 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" exitCode=0
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791215 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" exitCode=0
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791317 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" exitCode=143
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791433 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" exitCode=143
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.789330 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791670 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791706 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791728 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791747 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791845 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.791892 4857 scope.go:117] "RemoveContainer" containerID="f81523d2ebc148b7e6c6fb6c5b3a18da129cd8e2dcd7056a6dbbf0ea56e532ea"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.796003 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/2.log"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.796643 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/1.log"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.796801 4857 generic.go:334] "Generic (PLEG): container finished" podID="0ca53fe8-513c-4226-8659-208b304ffb78" containerID="45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31" exitCode=2
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.796854 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerDied","Data":"45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31"}
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.797628 4857 scope.go:117] "RemoveContainer" containerID="45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31"
Mar 18 14:13:17 crc kubenswrapper[4857]: E0318 14:13:17.798233 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bdlm5_openshift-multus(0ca53fe8-513c-4226-8659-208b304ffb78)\"" pod="openshift-multus/multus-bdlm5" podUID="0ca53fe8-513c-4226-8659-208b304ffb78"
Mar 18 14:13:17 crc kubenswrapper[4857]: I0318 14:13:17.821896 4857 scope.go:117] "RemoveContainer" containerID="b48963ed9e483ccbeec10dae3b231fb180d3e35ef9ff2fd30e6d9ba89fc422ee"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.451527 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-acl-logging/0.log"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.452468 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-controller/0.log"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.453229 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536290 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gjnmh"]
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536726 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="northd"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536751 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="northd"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536787 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536796 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536805 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536813 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536822 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536830 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536839 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="extract"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536847 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="extract"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536857 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-acl-logging"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536865 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-acl-logging"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536875 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="sbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536882 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="sbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536896 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536902 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536914 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536921 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536934 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kubecfg-setup"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536941 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kubecfg-setup"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536951 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="nbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536957 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="nbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536972 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536979 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.536987 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="util"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.536994 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="util"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.537008 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-node"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537018 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-node"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.537030 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="pull"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537038 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="pull"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537167 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="northd"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537177 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="sbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537187 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537199 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a640fe72-4cc0-46a9-b835-36c8d15718ce" containerName="extract"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537209 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537227 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovn-acl-logging"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537238 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537247 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-node"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537258 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537268 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="kube-rbac-proxy-ovn-metrics"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537275 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537285 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537295 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="nbdb"
Mar 18 14:13:18 crc kubenswrapper[4857]: E0318 14:13:18.537457 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.537467 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerName="ovnkube-controller"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.543316 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh"
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603310 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8nhj\" (UniqueName: \"kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603395 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603510 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log" (OuterVolumeSpecName: "node-log") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603612 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603685 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603785 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603831 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.603956 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604000 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604423 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604463 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604465 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash" (OuterVolumeSpecName: "host-slash") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604490 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604514 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604559 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604586 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604613 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604639 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604659 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604679 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604694 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604713 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604732 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604797 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604833 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes\") pod \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\" (UID: \"5bdcb274-14da-4683-8c0a-0b71e2d2a16f\") "
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604905 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604912 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.604949 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605237 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605278 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605585 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605625 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket" (OuterVolumeSpecName: "log-socket") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605649 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605673 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605696 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605719 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605785 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605872 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-var-lib-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605922 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-ovn\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605955 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.605988 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr68b\" (UniqueName: \"kubernetes.io/projected/d50187d9-f94c-4f95-87f4-1065bb1d9eed-kube-api-access-jr68b\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606011 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-config\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606051 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-netd\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606086 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606108 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-bin\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606132 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-systemd\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606163 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-systemd-units\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606190 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606232 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-slash\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606276 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-node-log\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606297 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovn-node-metrics-cert\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606592 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-log-socket\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606630 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-env-overrides\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606670 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-etc-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 
14:13:18.606690 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-script-lib\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606731 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-kubelet\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606769 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-netns\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606835 4857 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-node-log\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606851 4857 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606864 4857 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc 
kubenswrapper[4857]: I0318 14:13:18.606875 4857 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606886 4857 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-slash\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606896 4857 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606907 4857 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606919 4857 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606930 4857 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606942 4857 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606953 4857 reconciler_common.go:293] "Volume 
detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-log-socket\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606965 4857 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606978 4857 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.606990 4857 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.607001 4857 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.607013 4857 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.607026 4857 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.614924 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert" 
(OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.615133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj" (OuterVolumeSpecName: "kube-api-access-g8nhj") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "kube-api-access-g8nhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.621702 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5bdcb274-14da-4683-8c0a-0b71e2d2a16f" (UID: "5bdcb274-14da-4683-8c0a-0b71e2d2a16f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708367 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-node-log\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708413 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovn-node-metrics-cert\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708444 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-log-socket\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708459 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-env-overrides\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708485 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-etc-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc 
kubenswrapper[4857]: I0318 14:13:18.708499 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-script-lib\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708524 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-kubelet\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708538 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-netns\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708555 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-var-lib-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708579 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-ovn\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708603 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708687 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr68b\" (UniqueName: \"kubernetes.io/projected/d50187d9-f94c-4f95-87f4-1065bb1d9eed-kube-api-access-jr68b\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708671 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-log-socket\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-node-log\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708707 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-kubelet\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708715 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-config\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708918 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-ovn\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708993 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-var-lib-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709000 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-netns\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708867 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-etc-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.708990 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-netd\") 
pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709044 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-netd\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709083 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709095 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709143 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-bin\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709168 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-openvswitch\") pod \"ovnkube-node-gjnmh\" (UID: 
\"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709187 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-cni-bin\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709241 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-systemd\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709291 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-run-systemd\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709351 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-systemd-units\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" 
Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709450 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-slash\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709594 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8nhj\" (UniqueName: \"kubernetes.io/projected/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-kube-api-access-g8nhj\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709607 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-systemd-units\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709619 4857 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709690 4857 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5bdcb274-14da-4683-8c0a-0b71e2d2a16f-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709705 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-config\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 
14:13:18.709658 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-slash\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709690 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d50187d9-f94c-4f95-87f4-1065bb1d9eed-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709703 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-env-overrides\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.709850 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovnkube-script-lib\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.715441 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d50187d9-f94c-4f95-87f4-1065bb1d9eed-ovn-node-metrics-cert\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.733636 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jr68b\" (UniqueName: \"kubernetes.io/projected/d50187d9-f94c-4f95-87f4-1065bb1d9eed-kube-api-access-jr68b\") pod \"ovnkube-node-gjnmh\" (UID: \"d50187d9-f94c-4f95-87f4-1065bb1d9eed\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.812424 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-acl-logging/0.log" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.824244 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bpx9l_5bdcb274-14da-4683-8c0a-0b71e2d2a16f/ovn-controller/0.log" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826559 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" exitCode=0 Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826598 4857 generic.go:334] "Generic (PLEG): container finished" podID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" containerID="584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" exitCode=0 Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826691 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20"} Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826719 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389"} Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826731 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" event={"ID":"5bdcb274-14da-4683-8c0a-0b71e2d2a16f","Type":"ContainerDied","Data":"0f95f7d1b3d34e34c98ede14883fff7a0cef047f4bf19eef28c38dce50514240"} Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826769 4857 scope.go:117] "RemoveContainer" containerID="c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.826993 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bpx9l" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.832296 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/2.log" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.861220 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.867451 4857 scope.go:117] "RemoveContainer" containerID="27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.868693 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bpx9l"] Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.874606 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bpx9l"] Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.901842 4857 scope.go:117] "RemoveContainer" containerID="a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.920355 4857 scope.go:117] "RemoveContainer" containerID="43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.960897 4857 scope.go:117] "RemoveContainer" 
containerID="d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" Mar 18 14:13:18 crc kubenswrapper[4857]: I0318 14:13:18.984536 4857 scope.go:117] "RemoveContainer" containerID="584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.013693 4857 scope.go:117] "RemoveContainer" containerID="b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.039429 4857 scope.go:117] "RemoveContainer" containerID="42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.069295 4857 scope.go:117] "RemoveContainer" containerID="64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.100723 4857 scope.go:117] "RemoveContainer" containerID="c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.101392 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c\": container with ID starting with c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c not found: ID does not exist" containerID="c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.101436 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c"} err="failed to get container status \"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c\": rpc error: code = NotFound desc = could not find container \"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c\": container with ID starting with 
c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.101468 4857 scope.go:117] "RemoveContainer" containerID="27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.102073 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\": container with ID starting with 27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272 not found: ID does not exist" containerID="27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.102133 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272"} err="failed to get container status \"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\": rpc error: code = NotFound desc = could not find container \"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\": container with ID starting with 27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.102174 4857 scope.go:117] "RemoveContainer" containerID="a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.102602 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\": container with ID starting with a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf not found: ID does not exist" containerID="a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" Mar 18 14:13:19 crc 
kubenswrapper[4857]: I0318 14:13:19.102633 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf"} err="failed to get container status \"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\": rpc error: code = NotFound desc = could not find container \"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\": container with ID starting with a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.102656 4857 scope.go:117] "RemoveContainer" containerID="43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.103084 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\": container with ID starting with 43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811 not found: ID does not exist" containerID="43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.103118 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811"} err="failed to get container status \"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\": rpc error: code = NotFound desc = could not find container \"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\": container with ID starting with 43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.103138 4857 scope.go:117] "RemoveContainer" containerID="d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" Mar 18 
14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.103495 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\": container with ID starting with d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20 not found: ID does not exist" containerID="d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.103523 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20"} err="failed to get container status \"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\": rpc error: code = NotFound desc = could not find container \"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\": container with ID starting with d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.103542 4857 scope.go:117] "RemoveContainer" containerID="584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.103984 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\": container with ID starting with 584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389 not found: ID does not exist" containerID="584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104011 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389"} err="failed to get container status 
\"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\": rpc error: code = NotFound desc = could not find container \"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\": container with ID starting with 584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104029 4857 scope.go:117] "RemoveContainer" containerID="b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.104310 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\": container with ID starting with b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1 not found: ID does not exist" containerID="b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104373 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1"} err="failed to get container status \"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\": rpc error: code = NotFound desc = could not find container \"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\": container with ID starting with b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104400 4857 scope.go:117] "RemoveContainer" containerID="42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.104851 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\": container with ID starting with 42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45 not found: ID does not exist" containerID="42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104881 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45"} err="failed to get container status \"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\": rpc error: code = NotFound desc = could not find container \"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\": container with ID starting with 42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.104900 4857 scope.go:117] "RemoveContainer" containerID="64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb" Mar 18 14:13:19 crc kubenswrapper[4857]: E0318 14:13:19.105240 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\": container with ID starting with 64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb not found: ID does not exist" containerID="64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.105300 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb"} err="failed to get container status \"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\": rpc error: code = NotFound desc = could not find container \"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\": container with ID 
starting with 64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.105338 4857 scope.go:117] "RemoveContainer" containerID="c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.105821 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c"} err="failed to get container status \"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c\": rpc error: code = NotFound desc = could not find container \"c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c\": container with ID starting with c96dee519fee998cbced28f8deb35f3693b2a01cfef96548a4c2c3d720e92e8c not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.105849 4857 scope.go:117] "RemoveContainer" containerID="27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106192 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272"} err="failed to get container status \"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\": rpc error: code = NotFound desc = could not find container \"27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272\": container with ID starting with 27ba87dece891fe6db49e93d5e7f83d791caffc3a7c97f7ce5568e73f457c272 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106218 4857 scope.go:117] "RemoveContainer" containerID="a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106596 4857 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf"} err="failed to get container status \"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\": rpc error: code = NotFound desc = could not find container \"a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf\": container with ID starting with a7dc92818b37c6aed78a7ef129f1ac5f562943388477b66810e3cee546fcabcf not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106628 4857 scope.go:117] "RemoveContainer" containerID="43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106922 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811"} err="failed to get container status \"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\": rpc error: code = NotFound desc = could not find container \"43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811\": container with ID starting with 43dcb270187a137319c30073e8574d4f6d64f9fce1b055bdd754117a2d2bd811 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.106945 4857 scope.go:117] "RemoveContainer" containerID="d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.107253 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20"} err="failed to get container status \"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\": rpc error: code = NotFound desc = could not find container \"d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20\": container with ID starting with d043bde85d466d1be751036e67d111e0fda76ba5df112246014893d85f2a9a20 not found: ID does not 
exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.107306 4857 scope.go:117] "RemoveContainer" containerID="584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.107876 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389"} err="failed to get container status \"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\": rpc error: code = NotFound desc = could not find container \"584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389\": container with ID starting with 584a54a79ee591ab5544d19ffed5ec34389394a6d97f41473a10f93032ad2389 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.107903 4857 scope.go:117] "RemoveContainer" containerID="b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.108185 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1"} err="failed to get container status \"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\": rpc error: code = NotFound desc = could not find container \"b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1\": container with ID starting with b4f02b6a5f6ba97dfe8f9766ea58cf21980f33ea2aa4ba739403234ff35e18e1 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.108228 4857 scope.go:117] "RemoveContainer" containerID="42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.108532 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45"} err="failed to get container status 
\"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\": rpc error: code = NotFound desc = could not find container \"42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45\": container with ID starting with 42393f8cfd1aa33d0dee0eaa449edc6e6ac0cbbcbd3770ef3db76346fad31c45 not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.108570 4857 scope.go:117] "RemoveContainer" containerID="64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.108800 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb"} err="failed to get container status \"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\": rpc error: code = NotFound desc = could not find container \"64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb\": container with ID starting with 64fb1142bc82817ab6730997b62c656cb744eb88a93bfec117393eec3710a1bb not found: ID does not exist" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.173277 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bdcb274-14da-4683-8c0a-0b71e2d2a16f" path="/var/lib/kubelet/pods/5bdcb274-14da-4683-8c0a-0b71e2d2a16f/volumes" Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.838630 4857 generic.go:334] "Generic (PLEG): container finished" podID="d50187d9-f94c-4f95-87f4-1065bb1d9eed" containerID="3f31333e0e3e21f7df935082fe84a609b348a584b5b4ade6a47b82791eee49c9" exitCode=0 Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.838741 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerDied","Data":"3f31333e0e3e21f7df935082fe84a609b348a584b5b4ade6a47b82791eee49c9"} Mar 18 14:13:19 crc kubenswrapper[4857]: I0318 14:13:19.839019 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"3afde3a05a653e2a3dd4b6c40a0122532b42c6c1535ed9a400f39cf2c6531b24"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.876836 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"63d2bebceab29539363be861acc90d0bff58a6564fa17ea50c74fab8a7b08f07"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.877200 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"a91e5c900948fb1d9ca3396067d901e89a336a203306cac114d0d37741d02923"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.877219 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"c16ca653a22af1e514bb7a79ecfb0c6e4ef94bbead79dc9ea5e7114e12e723a4"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.877230 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"27cf2637ec25def81cbc815c7d93a5892a7ae2997aa470f115708f7c6cf834a7"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.877251 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"aa983bd82fb26057ddbbc2b648679f7e54038b4c952beb1317342525f08287c2"} Mar 18 14:13:20 crc kubenswrapper[4857]: I0318 14:13:20.877265 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"6664c8489612b5c52b0e39e21cb8eb6d37a02b558bd3ea0e078496dbadeaad1f"} Mar 18 14:13:23 crc kubenswrapper[4857]: I0318 14:13:23.902866 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"de247ae275ebce569033774c06376110fdaa3490db450e32001f52f97fc55ebd"} Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.002061 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" event={"ID":"d50187d9-f94c-4f95-87f4-1065bb1d9eed","Type":"ContainerStarted","Data":"b5491fc945a1dea7afb57ab3dca915a26b266e54884236a5e9a348eaa68f0474"} Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.002636 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.002740 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.003403 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.055304 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.087352 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:26 crc kubenswrapper[4857]: I0318 14:13:26.093161 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" podStartSLOduration=8.09309946 
podStartE2EDuration="8.09309946s" podCreationTimestamp="2026-03-18 14:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:13:26.074286907 +0000 UTC m=+790.203415364" watchObservedRunningTime="2026-03-18 14:13:26.09309946 +0000 UTC m=+790.222227917" Mar 18 14:13:28 crc kubenswrapper[4857]: I0318 14:13:28.163576 4857 scope.go:117] "RemoveContainer" containerID="45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31" Mar 18 14:13:28 crc kubenswrapper[4857]: E0318 14:13:28.164255 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bdlm5_openshift-multus(0ca53fe8-513c-4226-8659-208b304ffb78)\"" pod="openshift-multus/multus-bdlm5" podUID="0ca53fe8-513c-4226-8659-208b304ffb78" Mar 18 14:13:29 crc kubenswrapper[4857]: I0318 14:13:29.990993 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b"] Mar 18 14:13:29 crc kubenswrapper[4857]: I0318 14:13:29.992791 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:29 crc kubenswrapper[4857]: I0318 14:13:29.996513 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zspkz" Mar 18 14:13:29 crc kubenswrapper[4857]: I0318 14:13:29.996586 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 14:13:29 crc kubenswrapper[4857]: I0318 14:13:29.996539 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.015388 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.109368 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjsrd\" (UniqueName: \"kubernetes.io/projected/501dc1bd-0a04-4aef-bff8-43c9e767215f-kube-api-access-qjsrd\") pod \"obo-prometheus-operator-8ff7d675-lsp5b\" (UID: \"501dc1bd-0a04-4aef-bff8-43c9e767215f\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.210662 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjsrd\" (UniqueName: \"kubernetes.io/projected/501dc1bd-0a04-4aef-bff8-43c9e767215f-kube-api-access-qjsrd\") pod \"obo-prometheus-operator-8ff7d675-lsp5b\" (UID: \"501dc1bd-0a04-4aef-bff8-43c9e767215f\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.246631 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjsrd\" (UniqueName: \"kubernetes.io/projected/501dc1bd-0a04-4aef-bff8-43c9e767215f-kube-api-access-qjsrd\") pod 
\"obo-prometheus-operator-8ff7d675-lsp5b\" (UID: \"501dc1bd-0a04-4aef-bff8-43c9e767215f\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.314645 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.358644 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(a9f566dfc566dd6b255cf2700dd073e48f0a1609993793880a385effa675d6a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.358791 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(a9f566dfc566dd6b255cf2700dd073e48f0a1609993793880a385effa675d6a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.358906 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(a9f566dfc566dd6b255cf2700dd073e48f0a1609993793880a385effa675d6a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.358965 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(a9f566dfc566dd6b255cf2700dd073e48f0a1609993793880a385effa675d6a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" podUID="501dc1bd-0a04-4aef-bff8-43c9e767215f" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.501425 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.502246 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.504038 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.504507 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-9wt2n" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.512732 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.514331 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.536690 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.543547 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.615804 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.615922 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.616013 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.616274 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.720176 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.720303 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.720343 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.720396 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.725778 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.727411 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518b89ef-5060-4ec2-9a2d-7c64fa3555a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh\" (UID: \"518b89ef-5060-4ec2-9a2d-7c64fa3555a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 
14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.728666 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.730243 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj\" (UID: \"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.807801 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-5mw69"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.808679 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.811167 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7wpl9" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.812179 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.820819 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.820881 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.821036 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcm89\" (UniqueName: \"kubernetes.io/projected/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-kube-api-access-xcm89\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.827431 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-5mw69"] Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.838956 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.855028 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(aef36af6ff18fd31040afe2612cd6df2f23b61ccf5763393e37c6226dd1846f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.855127 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(aef36af6ff18fd31040afe2612cd6df2f23b61ccf5763393e37c6226dd1846f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.855170 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(aef36af6ff18fd31040afe2612cd6df2f23b61ccf5763393e37c6226dd1846f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.855221 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(aef36af6ff18fd31040afe2612cd6df2f23b61ccf5763393e37c6226dd1846f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" podUID="518b89ef-5060-4ec2-9a2d-7c64fa3555a5" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.873539 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ab4b2235d142938064b5d39b19349d0460f8301536599535480d1348266963cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.873622 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ab4b2235d142938064b5d39b19349d0460f8301536599535480d1348266963cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.873656 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ab4b2235d142938064b5d39b19349d0460f8301536599535480d1348266963cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:30 crc kubenswrapper[4857]: E0318 14:13:30.873714 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ab4b2235d142938064b5d39b19349d0460f8301536599535480d1348266963cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" podUID="ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.922263 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcm89\" (UniqueName: \"kubernetes.io/projected/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-kube-api-access-xcm89\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.922358 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.927405 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:30 crc kubenswrapper[4857]: I0318 14:13:30.942590 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcm89\" (UniqueName: \"kubernetes.io/projected/264f3d7a-0c38-4d0a-9ff7-4f3a24164f59-kube-api-access-xcm89\") pod \"observability-operator-6dd7dd855f-5mw69\" (UID: \"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59\") " pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.192073 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.201594 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.202092 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.202284 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.202487 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.202671 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.203852 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.277333 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-6c9d87fc97-ddtxj"] Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.278495 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.281997 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nxf6t" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.282616 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.316913 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(dec01782236420b2fa73337de373eb1c04148f3b988a7083c3d5d72750716f92): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.317228 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(dec01782236420b2fa73337de373eb1c04148f3b988a7083c3d5d72750716f92): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.317338 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(dec01782236420b2fa73337de373eb1c04148f3b988a7083c3d5d72750716f92): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.317513 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(dec01782236420b2fa73337de373eb1c04148f3b988a7083c3d5d72750716f92): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.331024 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-webhook-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.331222 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7szr\" (UniqueName: \"kubernetes.io/projected/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-kube-api-access-m7szr\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.331276 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-apiservice-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.331320 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-openshift-service-ca\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.334664 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-6c9d87fc97-ddtxj"] Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.418954 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(4737fb03b472d3016df8d8d3816d98caed099742b9c125b9dd428da3859ea34d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.419058 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(4737fb03b472d3016df8d8d3816d98caed099742b9c125b9dd428da3859ea34d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.419090 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(4737fb03b472d3016df8d8d3816d98caed099742b9c125b9dd428da3859ea34d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.419165 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(4737fb03b472d3016df8d8d3816d98caed099742b9c125b9dd428da3859ea34d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" podUID="ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430288 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(1d9113b741cad860b2badf17d55b73ea3d25369e02e8880ad82a193a4a230ef6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430325 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(f192e6d1886b7e606eaa857e57468b332225eb291c97e12e9bdd208b94e031d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430366 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(1d9113b741cad860b2badf17d55b73ea3d25369e02e8880ad82a193a4a230ef6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430390 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(1d9113b741cad860b2badf17d55b73ea3d25369e02e8880ad82a193a4a230ef6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430423 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(f192e6d1886b7e606eaa857e57468b332225eb291c97e12e9bdd208b94e031d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430447 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(f192e6d1886b7e606eaa857e57468b332225eb291c97e12e9bdd208b94e031d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430445 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(1d9113b741cad860b2badf17d55b73ea3d25369e02e8880ad82a193a4a230ef6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" podUID="518b89ef-5060-4ec2-9a2d-7c64fa3555a5" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.430503 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(f192e6d1886b7e606eaa857e57468b332225eb291c97e12e9bdd208b94e031d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" podUID="501dc1bd-0a04-4aef-bff8-43c9e767215f" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.441462 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7szr\" (UniqueName: \"kubernetes.io/projected/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-kube-api-access-m7szr\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.441594 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-apiservice-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.441655 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-openshift-service-ca\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.441777 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-webhook-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.443542 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-openshift-service-ca\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.446638 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-apiservice-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.446933 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-webhook-cert\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.474321 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m7szr\" (UniqueName: \"kubernetes.io/projected/79d3df2c-25f0-4e16-a39d-cc0d6a85277f-kube-api-access-m7szr\") pod \"perses-operator-6c9d87fc97-ddtxj\" (UID: \"79d3df2c-25f0-4e16-a39d-cc0d6a85277f\") " pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: I0318 14:13:31.611590 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.636035 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(adbf9d4c5059f7306ee98bae7862ff637f7a1fd1e3ca3d2af7850d7ca80bfe1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.636239 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(adbf9d4c5059f7306ee98bae7862ff637f7a1fd1e3ca3d2af7850d7ca80bfe1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.636328 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(adbf9d4c5059f7306ee98bae7862ff637f7a1fd1e3ca3d2af7850d7ca80bfe1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:31 crc kubenswrapper[4857]: E0318 14:13:31.636438 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-6c9d87fc97-ddtxj_openshift-operators(79d3df2c-25f0-4e16-a39d-cc0d6a85277f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-6c9d87fc97-ddtxj_openshift-operators(79d3df2c-25f0-4e16-a39d-cc0d6a85277f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(adbf9d4c5059f7306ee98bae7862ff637f7a1fd1e3ca3d2af7850d7ca80bfe1d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" Mar 18 14:13:32 crc kubenswrapper[4857]: I0318 14:13:32.207987 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:32 crc kubenswrapper[4857]: I0318 14:13:32.207990 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:32 crc kubenswrapper[4857]: I0318 14:13:32.209528 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:32 crc kubenswrapper[4857]: I0318 14:13:32.209643 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.251069 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(76acdebedc79b0e24740a50d4392a3043388893519c6a8971579e387bca4d905): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.251171 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(76acdebedc79b0e24740a50d4392a3043388893519c6a8971579e387bca4d905): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.251234 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(76acdebedc79b0e24740a50d4392a3043388893519c6a8971579e387bca4d905): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.251547 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-6c9d87fc97-ddtxj_openshift-operators(79d3df2c-25f0-4e16-a39d-cc0d6a85277f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-6c9d87fc97-ddtxj_openshift-operators(79d3df2c-25f0-4e16-a39d-cc0d6a85277f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-6c9d87fc97-ddtxj_openshift-operators_79d3df2c-25f0-4e16-a39d-cc0d6a85277f_0(76acdebedc79b0e24740a50d4392a3043388893519c6a8971579e387bca4d905): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.259410 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(8ec7b6e68677cce36dd019f09d5bb52d33ab7c05b44406e8b6b56ff4dc14c84b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.259500 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(8ec7b6e68677cce36dd019f09d5bb52d33ab7c05b44406e8b6b56ff4dc14c84b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.259533 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(8ec7b6e68677cce36dd019f09d5bb52d33ab7c05b44406e8b6b56ff4dc14c84b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:32 crc kubenswrapper[4857]: E0318 14:13:32.259591 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(8ec7b6e68677cce36dd019f09d5bb52d33ab7c05b44406e8b6b56ff4dc14c84b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" Mar 18 14:13:42 crc kubenswrapper[4857]: I0318 14:13:42.162996 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:42 crc kubenswrapper[4857]: I0318 14:13:42.163941 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:42 crc kubenswrapper[4857]: E0318 14:13:42.435477 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(f6cb32dc92357d4fc97f5f7ce156cff99320b9c171b94d31db30a44e5512b173): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:42 crc kubenswrapper[4857]: E0318 14:13:42.435558 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(f6cb32dc92357d4fc97f5f7ce156cff99320b9c171b94d31db30a44e5512b173): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:42 crc kubenswrapper[4857]: E0318 14:13:42.435578 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(f6cb32dc92357d4fc97f5f7ce156cff99320b9c171b94d31db30a44e5512b173): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:42 crc kubenswrapper[4857]: E0318 14:13:42.435630 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators(518b89ef-5060-4ec2-9a2d-7c64fa3555a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_openshift-operators_518b89ef-5060-4ec2-9a2d-7c64fa3555a5_0(f6cb32dc92357d4fc97f5f7ce156cff99320b9c171b94d31db30a44e5512b173): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" podUID="518b89ef-5060-4ec2-9a2d-7c64fa3555a5" Mar 18 14:13:43 crc kubenswrapper[4857]: I0318 14:13:43.163160 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:43 crc kubenswrapper[4857]: I0318 14:13:43.163897 4857 scope.go:117] "RemoveContainer" containerID="45a9291d4a21b73a2d2525588d7034bced37db496453fd754ffb73605fe68b31" Mar 18 14:13:43 crc kubenswrapper[4857]: I0318 14:13:43.163984 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:43 crc kubenswrapper[4857]: E0318 14:13:43.216143 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ef458d10891ec2283d7fed5a87956dab1e3886074c833c34d8c2c5b3b6b843ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:43 crc kubenswrapper[4857]: E0318 14:13:43.216227 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ef458d10891ec2283d7fed5a87956dab1e3886074c833c34d8c2c5b3b6b843ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:43 crc kubenswrapper[4857]: E0318 14:13:43.216261 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ef458d10891ec2283d7fed5a87956dab1e3886074c833c34d8c2c5b3b6b843ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:43 crc kubenswrapper[4857]: E0318 14:13:43.216379 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators(ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_openshift-operators_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47_0(ef458d10891ec2283d7fed5a87956dab1e3886074c833c34d8c2c5b3b6b843ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" podUID="ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47" Mar 18 14:13:43 crc kubenswrapper[4857]: I0318 14:13:43.396434 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bdlm5_0ca53fe8-513c-4226-8659-208b304ffb78/kube-multus/2.log" Mar 18 14:13:43 crc kubenswrapper[4857]: I0318 14:13:43.396505 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bdlm5" event={"ID":"0ca53fe8-513c-4226-8659-208b304ffb78","Type":"ContainerStarted","Data":"a739447646db38009cd1ddaf537ce23fc4f5d7cdbf25b70dd233a3873206b785"} Mar 18 14:13:44 crc kubenswrapper[4857]: I0318 14:13:44.167249 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:44 crc kubenswrapper[4857]: I0318 14:13:44.168320 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:44 crc kubenswrapper[4857]: E0318 14:13:44.196101 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(221916a9be50ccfba49e84fc220dfab370a97f503959a24ab5468dc26cd71d48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 18 14:13:44 crc kubenswrapper[4857]: E0318 14:13:44.196220 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(221916a9be50ccfba49e84fc220dfab370a97f503959a24ab5468dc26cd71d48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:44 crc kubenswrapper[4857]: E0318 14:13:44.196260 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(221916a9be50ccfba49e84fc220dfab370a97f503959a24ab5468dc26cd71d48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:44 crc kubenswrapper[4857]: E0318 14:13:44.196367 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators(501dc1bd-0a04-4aef-bff8-43c9e767215f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-8ff7d675-lsp5b_openshift-operators_501dc1bd-0a04-4aef-bff8-43c9e767215f_0(221916a9be50ccfba49e84fc220dfab370a97f503959a24ab5468dc26cd71d48): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" podUID="501dc1bd-0a04-4aef-bff8-43c9e767215f" Mar 18 14:13:45 crc kubenswrapper[4857]: I0318 14:13:45.163537 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:45 crc kubenswrapper[4857]: I0318 14:13:45.164210 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:45 crc kubenswrapper[4857]: E0318 14:13:45.201405 4857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(2352e66cb1b8a68fb4cb8a1441549bf2ff103e7f20dfeb02b5f6286d187bd23c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 18 14:13:45 crc kubenswrapper[4857]: E0318 14:13:45.201580 4857 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(2352e66cb1b8a68fb4cb8a1441549bf2ff103e7f20dfeb02b5f6286d187bd23c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:45 crc kubenswrapper[4857]: E0318 14:13:45.201618 4857 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(2352e66cb1b8a68fb4cb8a1441549bf2ff103e7f20dfeb02b5f6286d187bd23c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:45 crc kubenswrapper[4857]: E0318 14:13:45.201771 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-6dd7dd855f-5mw69_openshift-operators(264f3d7a-0c38-4d0a-9ff7-4f3a24164f59)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-6dd7dd855f-5mw69_openshift-operators_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59_0(2352e66cb1b8a68fb4cb8a1441549bf2ff103e7f20dfeb02b5f6286d187bd23c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" Mar 18 14:13:46 crc kubenswrapper[4857]: I0318 14:13:46.163355 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:46 crc kubenswrapper[4857]: I0318 14:13:46.164365 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:46 crc kubenswrapper[4857]: W0318 14:13:46.502234 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d3df2c_25f0_4e16_a39d_cc0d6a85277f.slice/crio-ca84142c891f1080f0447b0e59917de0722625ddffaecef7b5e19a9435e0835a WatchSource:0}: Error finding container ca84142c891f1080f0447b0e59917de0722625ddffaecef7b5e19a9435e0835a: Status 404 returned error can't find the container with id ca84142c891f1080f0447b0e59917de0722625ddffaecef7b5e19a9435e0835a Mar 18 14:13:46 crc kubenswrapper[4857]: I0318 14:13:46.505956 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6c9d87fc97-ddtxj"] Mar 18 14:13:47 crc kubenswrapper[4857]: I0318 14:13:47.426561 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" event={"ID":"79d3df2c-25f0-4e16-a39d-cc0d6a85277f","Type":"ContainerStarted","Data":"ca84142c891f1080f0447b0e59917de0722625ddffaecef7b5e19a9435e0835a"} Mar 18 14:13:49 crc kubenswrapper[4857]: I0318 14:13:49.062654 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjnmh" Mar 18 14:13:55 crc kubenswrapper[4857]: I0318 14:13:55.954383 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" 
event={"ID":"79d3df2c-25f0-4e16-a39d-cc0d6a85277f","Type":"ContainerStarted","Data":"e61a9490fb9d8b065fe8b0169d3268de72dc2520de9e78c2d2b5e91593f47706"} Mar 18 14:13:55 crc kubenswrapper[4857]: I0318 14:13:55.955294 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.163074 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.163722 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.164238 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.164554 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.778333 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podStartSLOduration=17.116329016 podStartE2EDuration="25.778306644s" podCreationTimestamp="2026-03-18 14:13:31 +0000 UTC" firstStartedPulling="2026-03-18 14:13:46.504830933 +0000 UTC m=+810.633959390" lastFinishedPulling="2026-03-18 14:13:55.166808561 +0000 UTC m=+819.295937018" observedRunningTime="2026-03-18 14:13:55.987171075 +0000 UTC m=+820.116299532" watchObservedRunningTime="2026-03-18 14:13:56.778306644 +0000 UTC m=+820.907435101" Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.783090 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh"] Mar 18 14:13:56 crc kubenswrapper[4857]: W0318 14:13:56.791655 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518b89ef_5060_4ec2_9a2d_7c64fa3555a5.slice/crio-68b4b647689bceb91f18494f5923b9fe8d75287d4f64ed878280aff8df98df8a WatchSource:0}: Error finding container 68b4b647689bceb91f18494f5923b9fe8d75287d4f64ed878280aff8df98df8a: Status 404 returned error can't find the container with id 68b4b647689bceb91f18494f5923b9fe8d75287d4f64ed878280aff8df98df8a Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.796716 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.891443 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b"] Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.962656 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" event={"ID":"501dc1bd-0a04-4aef-bff8-43c9e767215f","Type":"ContainerStarted","Data":"44763ea92d2807e94f043d78ef9c0f3dde550a1dec9ac5a3476c874f2e25fac2"} Mar 18 14:13:56 crc kubenswrapper[4857]: I0318 14:13:56.964062 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" event={"ID":"518b89ef-5060-4ec2-9a2d-7c64fa3555a5","Type":"ContainerStarted","Data":"68b4b647689bceb91f18494f5923b9fe8d75287d4f64ed878280aff8df98df8a"} Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.038742 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.038833 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.162590 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.169742 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.601184 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-5mw69"] Mar 18 14:13:57 crc kubenswrapper[4857]: I0318 14:13:57.979600 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" event={"ID":"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59","Type":"ContainerStarted","Data":"0e252e25c1e45dc7ef701cd17176c27f99f369bd5d2992ea06783805afe51a2d"} Mar 18 14:13:58 crc kubenswrapper[4857]: I0318 14:13:58.163026 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:58 crc kubenswrapper[4857]: I0318 14:13:58.163603 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" Mar 18 14:13:58 crc kubenswrapper[4857]: I0318 14:13:58.710555 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj"] Mar 18 14:13:59 crc kubenswrapper[4857]: I0318 14:13:59.994245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" event={"ID":"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47","Type":"ContainerStarted","Data":"211c79fb59eed5877d83fe0f0c0e053101301a356b1b1c1d491240e3c2f1d9f8"} Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.134356 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564054-nzjjp"] Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.135478 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.139049 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.139284 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.139398 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.146356 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564054-nzjjp"] Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.395485 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpw4\" (UniqueName: \"kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4\") pod \"auto-csr-approver-29564054-nzjjp\" (UID: \"dfa12b13-20a5-4a32-adf5-6ac63823cce8\") " pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.498673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpw4\" (UniqueName: \"kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4\") pod \"auto-csr-approver-29564054-nzjjp\" (UID: \"dfa12b13-20a5-4a32-adf5-6ac63823cce8\") " pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.520435 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpw4\" (UniqueName: \"kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4\") pod \"auto-csr-approver-29564054-nzjjp\" (UID: \"dfa12b13-20a5-4a32-adf5-6ac63823cce8\") " 
pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:00 crc kubenswrapper[4857]: I0318 14:14:00.756358 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.004222 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" event={"ID":"501dc1bd-0a04-4aef-bff8-43c9e767215f","Type":"ContainerStarted","Data":"1b264ebf3b2804aea8b176a91a310a70c92d81d889cb6950b37a178adc243352"} Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.010366 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" event={"ID":"ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47","Type":"ContainerStarted","Data":"942adf964596d9b58c3f60b4ee5dea6112db78417621bc5e1ad497d7211ec5d8"} Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.012591 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" event={"ID":"518b89ef-5060-4ec2-9a2d-7c64fa3555a5","Type":"ContainerStarted","Data":"e04599a55c042d7c16e847eae4b81ebb72f018dec49ef7815fdd731303e8b557"} Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.027264 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-lsp5b" podStartSLOduration=29.006361016 podStartE2EDuration="32.02724161s" podCreationTimestamp="2026-03-18 14:13:29 +0000 UTC" firstStartedPulling="2026-03-18 14:13:56.910658561 +0000 UTC m=+821.039787018" lastFinishedPulling="2026-03-18 14:13:59.931539155 +0000 UTC m=+824.060667612" observedRunningTime="2026-03-18 14:14:01.020986513 +0000 UTC m=+825.150114980" watchObservedRunningTime="2026-03-18 14:14:01.02724161 +0000 UTC m=+825.156370067" Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.042336 
4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564054-nzjjp"] Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.048885 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh" podStartSLOduration=27.919563624 podStartE2EDuration="31.048855053s" podCreationTimestamp="2026-03-18 14:13:30 +0000 UTC" firstStartedPulling="2026-03-18 14:13:56.796379248 +0000 UTC m=+820.925507705" lastFinishedPulling="2026-03-18 14:13:59.925670677 +0000 UTC m=+824.054799134" observedRunningTime="2026-03-18 14:14:01.044517374 +0000 UTC m=+825.173645851" watchObservedRunningTime="2026-03-18 14:14:01.048855053 +0000 UTC m=+825.177983520" Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.083196 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj" podStartSLOduration=29.929504042 podStartE2EDuration="31.083176726s" podCreationTimestamp="2026-03-18 14:13:30 +0000 UTC" firstStartedPulling="2026-03-18 14:13:59.173360104 +0000 UTC m=+823.302488561" lastFinishedPulling="2026-03-18 14:14:00.327032788 +0000 UTC m=+824.456161245" observedRunningTime="2026-03-18 14:14:01.082297714 +0000 UTC m=+825.211426201" watchObservedRunningTime="2026-03-18 14:14:01.083176726 +0000 UTC m=+825.212305183" Mar 18 14:14:01 crc kubenswrapper[4857]: I0318 14:14:01.614792 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 14:14:02 crc kubenswrapper[4857]: I0318 14:14:02.021875 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" event={"ID":"dfa12b13-20a5-4a32-adf5-6ac63823cce8","Type":"ContainerStarted","Data":"ec12ad10b97e7cb0fc0c4edd983117718f59931b7a0c1b91469afa7899cddfde"} Mar 18 14:14:03 crc kubenswrapper[4857]: 
I0318 14:14:03.031008 4857 generic.go:334] "Generic (PLEG): container finished" podID="dfa12b13-20a5-4a32-adf5-6ac63823cce8" containerID="c96c979c26a0bc220c57255506b54c476e6e207d8ca5791652e18ffdf79b2241" exitCode=0 Mar 18 14:14:03 crc kubenswrapper[4857]: I0318 14:14:03.031162 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" event={"ID":"dfa12b13-20a5-4a32-adf5-6ac63823cce8","Type":"ContainerDied","Data":"c96c979c26a0bc220c57255506b54c476e6e207d8ca5791652e18ffdf79b2241"} Mar 18 14:14:04 crc kubenswrapper[4857]: I0318 14:14:04.444431 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:04 crc kubenswrapper[4857]: I0318 14:14:04.631409 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndpw4\" (UniqueName: \"kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4\") pod \"dfa12b13-20a5-4a32-adf5-6ac63823cce8\" (UID: \"dfa12b13-20a5-4a32-adf5-6ac63823cce8\") " Mar 18 14:14:04 crc kubenswrapper[4857]: I0318 14:14:04.651114 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4" (OuterVolumeSpecName: "kube-api-access-ndpw4") pod "dfa12b13-20a5-4a32-adf5-6ac63823cce8" (UID: "dfa12b13-20a5-4a32-adf5-6ac63823cce8"). InnerVolumeSpecName "kube-api-access-ndpw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:14:04 crc kubenswrapper[4857]: I0318 14:14:04.735382 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndpw4\" (UniqueName: \"kubernetes.io/projected/dfa12b13-20a5-4a32-adf5-6ac63823cce8-kube-api-access-ndpw4\") on node \"crc\" DevicePath \"\"" Mar 18 14:14:05 crc kubenswrapper[4857]: I0318 14:14:05.063721 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" event={"ID":"dfa12b13-20a5-4a32-adf5-6ac63823cce8","Type":"ContainerDied","Data":"ec12ad10b97e7cb0fc0c4edd983117718f59931b7a0c1b91469afa7899cddfde"} Mar 18 14:14:05 crc kubenswrapper[4857]: I0318 14:14:05.063805 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec12ad10b97e7cb0fc0c4edd983117718f59931b7a0c1b91469afa7899cddfde" Mar 18 14:14:05 crc kubenswrapper[4857]: I0318 14:14:05.063795 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564054-nzjjp" Mar 18 14:14:05 crc kubenswrapper[4857]: I0318 14:14:05.517877 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564048-fjxnk"] Mar 18 14:14:05 crc kubenswrapper[4857]: I0318 14:14:05.523549 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564048-fjxnk"] Mar 18 14:14:07 crc kubenswrapper[4857]: I0318 14:14:07.177614 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d743e0-c9e4-4682-bfd1-e80f5522b013" path="/var/lib/kubelet/pods/09d743e0-c9e4-4682-bfd1-e80f5522b013/volumes" Mar 18 14:14:08 crc kubenswrapper[4857]: I0318 14:14:08.092656 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" 
event={"ID":"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59","Type":"ContainerStarted","Data":"23adb73f740dbf82d50af3ac9a84d6751f75602c16a4ad609ec63adf6b75f7f4"} Mar 18 14:14:08 crc kubenswrapper[4857]: I0318 14:14:08.093270 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:14:08 crc kubenswrapper[4857]: I0318 14:14:08.098342 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 14:14:08 crc kubenswrapper[4857]: I0318 14:14:08.117222 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podStartSLOduration=28.280467227 podStartE2EDuration="38.117202119s" podCreationTimestamp="2026-03-18 14:13:30 +0000 UTC" firstStartedPulling="2026-03-18 14:13:57.611442418 +0000 UTC m=+821.740570875" lastFinishedPulling="2026-03-18 14:14:07.44817732 +0000 UTC m=+831.577305767" observedRunningTime="2026-03-18 14:14:08.113302941 +0000 UTC m=+832.242431418" watchObservedRunningTime="2026-03-18 14:14:08.117202119 +0000 UTC m=+832.246330576" Mar 18 14:14:14 crc kubenswrapper[4857]: I0318 14:14:14.844244 4857 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.346714 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mtghx"] Mar 18 14:14:16 crc kubenswrapper[4857]: E0318 14:14:16.347379 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa12b13-20a5-4a32-adf5-6ac63823cce8" containerName="oc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.347405 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa12b13-20a5-4a32-adf5-6ac63823cce8" containerName="oc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.347585 
4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa12b13-20a5-4a32-adf5-6ac63823cce8" containerName="oc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.348233 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.353905 4857 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bth4x" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.354300 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.357136 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-sdd8s"] Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.365077 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.370670 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mtghx"] Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.370789 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sdd8s" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.376410 4857 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-w8vpb" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.380279 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sdd8s"] Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.431606 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mrtkc"] Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.433435 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.436868 4857 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-trkqn" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.450301 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mrtkc"] Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.531329 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjw4g\" (UniqueName: \"kubernetes.io/projected/14fe2326-441d-48c7-b4df-cc067beaadff-kube-api-access-zjw4g\") pod \"cert-manager-cainjector-cf98fcc89-mtghx\" (UID: \"14fe2326-441d-48c7-b4df-cc067beaadff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.531592 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gb5c\" (UniqueName: \"kubernetes.io/projected/22ba80af-7cf5-4581-abd9-b5078fb0bc48-kube-api-access-8gb5c\") pod \"cert-manager-858654f9db-sdd8s\" (UID: \"22ba80af-7cf5-4581-abd9-b5078fb0bc48\") " 
pod="cert-manager/cert-manager-858654f9db-sdd8s" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.730384 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjw4g\" (UniqueName: \"kubernetes.io/projected/14fe2326-441d-48c7-b4df-cc067beaadff-kube-api-access-zjw4g\") pod \"cert-manager-cainjector-cf98fcc89-mtghx\" (UID: \"14fe2326-441d-48c7-b4df-cc067beaadff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.730430 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gb5c\" (UniqueName: \"kubernetes.io/projected/22ba80af-7cf5-4581-abd9-b5078fb0bc48-kube-api-access-8gb5c\") pod \"cert-manager-858654f9db-sdd8s\" (UID: \"22ba80af-7cf5-4581-abd9-b5078fb0bc48\") " pod="cert-manager/cert-manager-858654f9db-sdd8s" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.730514 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpvzh\" (UniqueName: \"kubernetes.io/projected/2d9b7b6d-9b28-4a50-8bda-458c3f8088c1-kube-api-access-cpvzh\") pod \"cert-manager-webhook-687f57d79b-mrtkc\" (UID: \"2d9b7b6d-9b28-4a50-8bda-458c3f8088c1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.748938 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjw4g\" (UniqueName: \"kubernetes.io/projected/14fe2326-441d-48c7-b4df-cc067beaadff-kube-api-access-zjw4g\") pod \"cert-manager-cainjector-cf98fcc89-mtghx\" (UID: \"14fe2326-441d-48c7-b4df-cc067beaadff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.749805 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gb5c\" (UniqueName: 
\"kubernetes.io/projected/22ba80af-7cf5-4581-abd9-b5078fb0bc48-kube-api-access-8gb5c\") pod \"cert-manager-858654f9db-sdd8s\" (UID: \"22ba80af-7cf5-4581-abd9-b5078fb0bc48\") " pod="cert-manager/cert-manager-858654f9db-sdd8s" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.831962 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpvzh\" (UniqueName: \"kubernetes.io/projected/2d9b7b6d-9b28-4a50-8bda-458c3f8088c1-kube-api-access-cpvzh\") pod \"cert-manager-webhook-687f57d79b-mrtkc\" (UID: \"2d9b7b6d-9b28-4a50-8bda-458c3f8088c1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.859854 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpvzh\" (UniqueName: \"kubernetes.io/projected/2d9b7b6d-9b28-4a50-8bda-458c3f8088c1-kube-api-access-cpvzh\") pod \"cert-manager-webhook-687f57d79b-mrtkc\" (UID: \"2d9b7b6d-9b28-4a50-8bda-458c3f8088c1\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:16 crc kubenswrapper[4857]: I0318 14:14:16.976785 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" Mar 18 14:14:17 crc kubenswrapper[4857]: I0318 14:14:17.003294 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sdd8s" Mar 18 14:14:17 crc kubenswrapper[4857]: I0318 14:14:17.067502 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:17 crc kubenswrapper[4857]: I0318 14:14:17.321286 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sdd8s"] Mar 18 14:14:17 crc kubenswrapper[4857]: I0318 14:14:17.556275 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mrtkc"] Mar 18 14:14:17 crc kubenswrapper[4857]: I0318 14:14:17.582156 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-mtghx"] Mar 18 14:14:17 crc kubenswrapper[4857]: W0318 14:14:17.585725 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14fe2326_441d_48c7_b4df_cc067beaadff.slice/crio-c60a0ca1f94ddff156f5dd0068e01eeb258f2f01bb89a73a56ab441ff2166bd6 WatchSource:0}: Error finding container c60a0ca1f94ddff156f5dd0068e01eeb258f2f01bb89a73a56ab441ff2166bd6: Status 404 returned error can't find the container with id c60a0ca1f94ddff156f5dd0068e01eeb258f2f01bb89a73a56ab441ff2166bd6 Mar 18 14:14:18 crc kubenswrapper[4857]: I0318 14:14:18.274154 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sdd8s" event={"ID":"22ba80af-7cf5-4581-abd9-b5078fb0bc48","Type":"ContainerStarted","Data":"fdf739be2b4da2d087d14cac998f4dfa976bc115cf5b2ea65791c2f19f7cee3b"} Mar 18 14:14:18 crc kubenswrapper[4857]: I0318 14:14:18.275637 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" event={"ID":"14fe2326-441d-48c7-b4df-cc067beaadff","Type":"ContainerStarted","Data":"c60a0ca1f94ddff156f5dd0068e01eeb258f2f01bb89a73a56ab441ff2166bd6"} Mar 18 14:14:18 crc kubenswrapper[4857]: I0318 14:14:18.277216 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" 
event={"ID":"2d9b7b6d-9b28-4a50-8bda-458c3f8088c1","Type":"ContainerStarted","Data":"fdda3ec1067854da5f22cd018ff685787a5fc8a49e31c314e83ac8e1b18bdcbc"} Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.342399 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sdd8s" event={"ID":"22ba80af-7cf5-4581-abd9-b5078fb0bc48","Type":"ContainerStarted","Data":"4fc15d3e3100f8f88994fef99c7ea6fd0ee08c89cff9de4575b226b33acc6c19"} Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.344425 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" event={"ID":"2d9b7b6d-9b28-4a50-8bda-458c3f8088c1","Type":"ContainerStarted","Data":"6e11bfd7f022dbe45bfc2d6fefc9d25a892658d300cb0e35eacff8341f0bd030"} Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.344543 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.345794 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" event={"ID":"14fe2326-441d-48c7-b4df-cc067beaadff","Type":"ContainerStarted","Data":"a36301e6679a140c2d65393a66325b6068c8710733329bf36b0a9140dc818d13"} Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.375033 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-sdd8s" podStartSLOduration=1.678194813 podStartE2EDuration="8.375009282s" podCreationTimestamp="2026-03-18 14:14:16 +0000 UTC" firstStartedPulling="2026-03-18 14:14:17.333087342 +0000 UTC m=+841.462215799" lastFinishedPulling="2026-03-18 14:14:24.029901811 +0000 UTC m=+848.159030268" observedRunningTime="2026-03-18 14:14:24.369953384 +0000 UTC m=+848.499081841" watchObservedRunningTime="2026-03-18 14:14:24.375009282 +0000 UTC m=+848.504137739" Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 
14:14:24.397301 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podStartSLOduration=2.013599249 podStartE2EDuration="8.397271187s" podCreationTimestamp="2026-03-18 14:14:16 +0000 UTC" firstStartedPulling="2026-03-18 14:14:17.56568286 +0000 UTC m=+841.694811317" lastFinishedPulling="2026-03-18 14:14:23.949354798 +0000 UTC m=+848.078483255" observedRunningTime="2026-03-18 14:14:24.391325046 +0000 UTC m=+848.520453503" watchObservedRunningTime="2026-03-18 14:14:24.397271187 +0000 UTC m=+848.526399644" Mar 18 14:14:24 crc kubenswrapper[4857]: I0318 14:14:24.426825 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-mtghx" podStartSLOduration=2.066241588 podStartE2EDuration="8.426799005s" podCreationTimestamp="2026-03-18 14:14:16 +0000 UTC" firstStartedPulling="2026-03-18 14:14:17.588409201 +0000 UTC m=+841.717537658" lastFinishedPulling="2026-03-18 14:14:23.948966628 +0000 UTC m=+848.078095075" observedRunningTime="2026-03-18 14:14:24.415422867 +0000 UTC m=+848.544551324" watchObservedRunningTime="2026-03-18 14:14:24.426799005 +0000 UTC m=+848.555927462" Mar 18 14:14:27 crc kubenswrapper[4857]: I0318 14:14:27.039049 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:14:27 crc kubenswrapper[4857]: I0318 14:14:27.039325 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:14:28 crc 
kubenswrapper[4857]: I0318 14:14:28.733978 4857 scope.go:117] "RemoveContainer" containerID="628bdf9f8205015d3581ce1098abd36c6903e8a90cfed3b38cdca5a80d2dd441" Mar 18 14:14:32 crc kubenswrapper[4857]: I0318 14:14:32.073095 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.038711 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.039544 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.039649 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.040653 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.040800 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" 
containerName="machine-config-daemon" containerID="cri-o://3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18" gracePeriod=600 Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.758911 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18" exitCode=0 Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.758932 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18"} Mar 18 14:14:57 crc kubenswrapper[4857]: I0318 14:14:57.759288 4857 scope.go:117] "RemoveContainer" containerID="a403af6a1c307b8215aed20aa4f32ceac916e2576777434b4f09ea45101b0ec1" Mar 18 14:14:58 crc kubenswrapper[4857]: I0318 14:14:58.770857 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65"} Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.619261 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z"] Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.620819 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.637360 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.641626 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z"] Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.732113 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx"] Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.733461 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.746789 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx"] Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.747393 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.747478 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp85z\" (UniqueName: \"kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: 
\"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.747558 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.849739 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.850215 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp85z\" (UniqueName: \"kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.850293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897lb\" (UniqueName: \"kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " 
pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.850392 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.850453 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.850532 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.927551 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.927569 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.949933 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp85z\" (UniqueName: \"kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z\") pod \"3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.952107 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-897lb\" (UniqueName: \"kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.952254 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.952384 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.953604 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.953892 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:14:59 crc kubenswrapper[4857]: I0318 14:14:59.986498 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-897lb\" (UniqueName: \"kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb\") pod \"4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.050352 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.142515 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98"] Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.143595 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.145962 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.146106 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.168368 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98"] Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.240836 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.255598 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.255688 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6rw\" (UniqueName: \"kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.255713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.363037 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.363452 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mg6rw\" (UniqueName: \"kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.363498 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.365933 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.372458 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.383763 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg6rw\" (UniqueName: \"kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw\") pod \"collect-profiles-29564055-5qq98\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc 
kubenswrapper[4857]: I0318 14:15:00.469134 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.523533 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z"] Mar 18 14:15:00 crc kubenswrapper[4857]: W0318 14:15:00.533464 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f6c7144_f8b7_4b54_bd26_806157743e00.slice/crio-3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0 WatchSource:0}: Error finding container 3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0: Status 404 returned error can't find the container with id 3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0 Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.533511 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx"] Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.787065 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerStarted","Data":"4514bf64c81800f288b2e3ffcc2184c65a51e1ffa2a3d7c32e7fef29195ca6d1"} Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.787440 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerStarted","Data":"3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0"} Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.788720 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerStarted","Data":"d4ffb21c52e35eb88f40eac0490a25307fd9cad075e0f62af71172102fce835f"} Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.788786 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerStarted","Data":"188d5f2502593b8845e848c83c93f6efd733bdc9759fc847ad181afc882c6b1b"} Mar 18 14:15:00 crc kubenswrapper[4857]: I0318 14:15:00.900505 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98"] Mar 18 14:15:00 crc kubenswrapper[4857]: W0318 14:15:00.909446 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852eb59a_14cd_48b7_86ed_d25d1d7f7a09.slice/crio-99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9 WatchSource:0}: Error finding container 99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9: Status 404 returned error can't find the container with id 99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9 Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.800743 4857 generic.go:334] "Generic (PLEG): container finished" podID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerID="4514bf64c81800f288b2e3ffcc2184c65a51e1ffa2a3d7c32e7fef29195ca6d1" exitCode=0 Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.801102 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerDied","Data":"4514bf64c81800f288b2e3ffcc2184c65a51e1ffa2a3d7c32e7fef29195ca6d1"} Mar 18 14:15:01 crc kubenswrapper[4857]: 
I0318 14:15:01.804007 4857 generic.go:334] "Generic (PLEG): container finished" podID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerID="d4ffb21c52e35eb88f40eac0490a25307fd9cad075e0f62af71172102fce835f" exitCode=0 Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.804051 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerDied","Data":"d4ffb21c52e35eb88f40eac0490a25307fd9cad075e0f62af71172102fce835f"} Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.805659 4857 generic.go:334] "Generic (PLEG): container finished" podID="852eb59a-14cd-48b7-86ed-d25d1d7f7a09" containerID="7d587065861c19a1692a67f5854e10cf1b4479642b9de329546d0066363c5da8" exitCode=0 Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.805698 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" event={"ID":"852eb59a-14cd-48b7-86ed-d25d1d7f7a09","Type":"ContainerDied","Data":"7d587065861c19a1692a67f5854e10cf1b4479642b9de329546d0066363c5da8"} Mar 18 14:15:01 crc kubenswrapper[4857]: I0318 14:15:01.805720 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" event={"ID":"852eb59a-14cd-48b7-86ed-d25d1d7f7a09","Type":"ContainerStarted","Data":"99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9"} Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.281485 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.284145 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.291350 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.325569 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372342 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg6rw\" (UniqueName: \"kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw\") pod \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372533 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume\") pod \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372588 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume\") pod \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\" (UID: \"852eb59a-14cd-48b7-86ed-d25d1d7f7a09\") " Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372859 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zjt\" (UniqueName: \"kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372931 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.372996 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.374149 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume" (OuterVolumeSpecName: "config-volume") pod "852eb59a-14cd-48b7-86ed-d25d1d7f7a09" (UID: "852eb59a-14cd-48b7-86ed-d25d1d7f7a09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.380260 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw" (OuterVolumeSpecName: "kube-api-access-mg6rw") pod "852eb59a-14cd-48b7-86ed-d25d1d7f7a09" (UID: "852eb59a-14cd-48b7-86ed-d25d1d7f7a09"). InnerVolumeSpecName "kube-api-access-mg6rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.390040 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "852eb59a-14cd-48b7-86ed-d25d1d7f7a09" (UID: "852eb59a-14cd-48b7-86ed-d25d1d7f7a09"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475058 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88zjt\" (UniqueName: \"kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475166 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475279 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475771 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475923 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg6rw\" (UniqueName: \"kubernetes.io/projected/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-kube-api-access-mg6rw\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475952 4857 reconciler_common.go:293] "Volume detached for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.475965 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/852eb59a-14cd-48b7-86ed-d25d1d7f7a09-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.476398 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.495357 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zjt\" (UniqueName: \"kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt\") pod \"redhat-operators-9pcr7\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.623220 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.863116 4857 generic.go:334] "Generic (PLEG): container finished" podID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerID="a51e0f6b3e998b36c628bf269c18236bef7c917b549b1c337cdeb07fe2dd00ad" exitCode=0 Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.863211 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerDied","Data":"a51e0f6b3e998b36c628bf269c18236bef7c917b549b1c337cdeb07fe2dd00ad"} Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.870640 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" event={"ID":"852eb59a-14cd-48b7-86ed-d25d1d7f7a09","Type":"ContainerDied","Data":"99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9"} Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.870698 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ee708b4a697a124185fb37d0f038236ae3cd7ba8d382f8e4b20c5126f8dbe9" Mar 18 14:15:03 crc kubenswrapper[4857]: I0318 14:15:03.870787 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98" Mar 18 14:15:04 crc kubenswrapper[4857]: I0318 14:15:04.047232 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:04 crc kubenswrapper[4857]: W0318 14:15:04.064840 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2609fb9b_2cdd_4700_a2c1_888556466d3b.slice/crio-caf923114b579bc8c377edf12c3d2b1cc507403f1ec88d3f461b08c6c4646ce8 WatchSource:0}: Error finding container caf923114b579bc8c377edf12c3d2b1cc507403f1ec88d3f461b08c6c4646ce8: Status 404 returned error can't find the container with id caf923114b579bc8c377edf12c3d2b1cc507403f1ec88d3f461b08c6c4646ce8 Mar 18 14:15:04 crc kubenswrapper[4857]: I0318 14:15:04.962911 4857 generic.go:334] "Generic (PLEG): container finished" podID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerID="87e91b0f7dabfc15f5552950357d3d19816dbce03a614ec4b7c1848e944eee29" exitCode=0 Mar 18 14:15:04 crc kubenswrapper[4857]: I0318 14:15:04.963348 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerDied","Data":"87e91b0f7dabfc15f5552950357d3d19816dbce03a614ec4b7c1848e944eee29"} Mar 18 14:15:05 crc kubenswrapper[4857]: I0318 14:15:05.002371 4857 generic.go:334] "Generic (PLEG): container finished" podID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerID="1259137a649cc22d37cbaef83151230d6b76c4f732f8cbc1dcd7afd8fba8af8d" exitCode=0 Mar 18 14:15:05 crc kubenswrapper[4857]: I0318 14:15:05.002505 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" 
event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerDied","Data":"1259137a649cc22d37cbaef83151230d6b76c4f732f8cbc1dcd7afd8fba8af8d"} Mar 18 14:15:05 crc kubenswrapper[4857]: I0318 14:15:05.014626 4857 generic.go:334] "Generic (PLEG): container finished" podID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerID="16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac" exitCode=0 Mar 18 14:15:05 crc kubenswrapper[4857]: I0318 14:15:05.014694 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerDied","Data":"16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac"} Mar 18 14:15:05 crc kubenswrapper[4857]: I0318 14:15:05.014729 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerStarted","Data":"caf923114b579bc8c377edf12c3d2b1cc507403f1ec88d3f461b08c6c4646ce8"} Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.025135 4857 generic.go:334] "Generic (PLEG): container finished" podID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerID="6026e06d0c71cdc984913e747157bb579e0547c43b23e18c8c4400d6f02f2c70" exitCode=0 Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.025223 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerDied","Data":"6026e06d0c71cdc984913e747157bb579e0547c43b23e18c8c4400d6f02f2c70"} Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.677263 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.986520 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp85z\" (UniqueName: \"kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z\") pod \"98d7117e-e25b-4325-a0d7-31bc5930fd08\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.990071 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util\") pod \"98d7117e-e25b-4325-a0d7-31bc5930fd08\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.990189 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle\") pod \"98d7117e-e25b-4325-a0d7-31bc5930fd08\" (UID: \"98d7117e-e25b-4325-a0d7-31bc5930fd08\") " Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.992192 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle" (OuterVolumeSpecName: "bundle") pod "98d7117e-e25b-4325-a0d7-31bc5930fd08" (UID: "98d7117e-e25b-4325-a0d7-31bc5930fd08"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:06 crc kubenswrapper[4857]: I0318 14:15:06.994207 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z" (OuterVolumeSpecName: "kube-api-access-vp85z") pod "98d7117e-e25b-4325-a0d7-31bc5930fd08" (UID: "98d7117e-e25b-4325-a0d7-31bc5930fd08"). InnerVolumeSpecName "kube-api-access-vp85z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.001009 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util" (OuterVolumeSpecName: "util") pod "98d7117e-e25b-4325-a0d7-31bc5930fd08" (UID: "98d7117e-e25b-4325-a0d7-31bc5930fd08"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.040168 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.040211 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z" event={"ID":"98d7117e-e25b-4325-a0d7-31bc5930fd08","Type":"ContainerDied","Data":"188d5f2502593b8845e848c83c93f6efd733bdc9759fc847ad181afc882c6b1b"} Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.040236 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188d5f2502593b8845e848c83c93f6efd733bdc9759fc847ad181afc882c6b1b" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.091895 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp85z\" (UniqueName: \"kubernetes.io/projected/98d7117e-e25b-4325-a0d7-31bc5930fd08-kube-api-access-vp85z\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.091932 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-util\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.091942 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/98d7117e-e25b-4325-a0d7-31bc5930fd08-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.350706 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.497759 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle\") pod \"9f6c7144-f8b7-4b54-bd26-806157743e00\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.498432 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util\") pod \"9f6c7144-f8b7-4b54-bd26-806157743e00\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.498707 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-897lb\" (UniqueName: \"kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb\") pod \"9f6c7144-f8b7-4b54-bd26-806157743e00\" (UID: \"9f6c7144-f8b7-4b54-bd26-806157743e00\") " Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.499118 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle" (OuterVolumeSpecName: "bundle") pod "9f6c7144-f8b7-4b54-bd26-806157743e00" (UID: "9f6c7144-f8b7-4b54-bd26-806157743e00"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.500531 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.504359 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb" (OuterVolumeSpecName: "kube-api-access-897lb") pod "9f6c7144-f8b7-4b54-bd26-806157743e00" (UID: "9f6c7144-f8b7-4b54-bd26-806157743e00"). InnerVolumeSpecName "kube-api-access-897lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.508443 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util" (OuterVolumeSpecName: "util") pod "9f6c7144-f8b7-4b54-bd26-806157743e00" (UID: "9f6c7144-f8b7-4b54-bd26-806157743e00"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.602242 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f6c7144-f8b7-4b54-bd26-806157743e00-util\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:07 crc kubenswrapper[4857]: I0318 14:15:07.602295 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-897lb\" (UniqueName: \"kubernetes.io/projected/9f6c7144-f8b7-4b54-bd26-806157743e00-kube-api-access-897lb\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:08 crc kubenswrapper[4857]: I0318 14:15:08.052439 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" event={"ID":"9f6c7144-f8b7-4b54-bd26-806157743e00","Type":"ContainerDied","Data":"3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0"} Mar 18 14:15:08 crc kubenswrapper[4857]: I0318 14:15:08.052860 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5eecc6c8ac3c4ace8fa4342a734d4dd19e4b1a92df5d2a64f5fc1776d57ce0" Mar 18 14:15:08 crc kubenswrapper[4857]: I0318 14:15:08.052492 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx" Mar 18 14:15:08 crc kubenswrapper[4857]: I0318 14:15:08.056707 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerStarted","Data":"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565"} Mar 18 14:15:09 crc kubenswrapper[4857]: I0318 14:15:09.069243 4857 generic.go:334] "Generic (PLEG): container finished" podID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerID="90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565" exitCode=0 Mar 18 14:15:09 crc kubenswrapper[4857]: I0318 14:15:09.069304 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerDied","Data":"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565"} Mar 18 14:15:11 crc kubenswrapper[4857]: I0318 14:15:11.232140 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerStarted","Data":"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80"} Mar 18 14:15:11 crc kubenswrapper[4857]: I0318 14:15:11.262371 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9pcr7" podStartSLOduration=3.290052576 podStartE2EDuration="8.262324144s" podCreationTimestamp="2026-03-18 14:15:03 +0000 UTC" firstStartedPulling="2026-03-18 14:15:05.017227019 +0000 UTC m=+889.146355486" lastFinishedPulling="2026-03-18 14:15:09.989498597 +0000 UTC m=+894.118627054" observedRunningTime="2026-03-18 14:15:11.257116132 +0000 UTC m=+895.386244589" watchObservedRunningTime="2026-03-18 14:15:11.262324144 +0000 UTC m=+895.391452601" Mar 18 14:15:13 crc 
kubenswrapper[4857]: I0318 14:15:13.624199 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:13 crc kubenswrapper[4857]: I0318 14:15:13.624345 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:14 crc kubenswrapper[4857]: I0318 14:15:14.692988 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9pcr7" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="registry-server" probeResult="failure" output=< Mar 18 14:15:14 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:15:14 crc kubenswrapper[4857]: > Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.851714 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht"] Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852500 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852528 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852543 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852551 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852561 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="pull" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852569 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="pull" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852585 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="util" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852592 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="util" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852606 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852eb59a-14cd-48b7-86ed-d25d1d7f7a09" containerName="collect-profiles" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852613 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="852eb59a-14cd-48b7-86ed-d25d1d7f7a09" containerName="collect-profiles" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852627 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="util" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852633 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="util" Mar 18 14:15:17 crc kubenswrapper[4857]: E0318 14:15:17.852647 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="pull" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852654 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="pull" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852853 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="852eb59a-14cd-48b7-86ed-d25d1d7f7a09" containerName="collect-profiles" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852877 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="98d7117e-e25b-4325-a0d7-31bc5930fd08" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.852895 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6c7144-f8b7-4b54-bd26-806157743e00" containerName="extract" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.853851 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.867301 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.867366 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.867373 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.868155 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.868185 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.869067 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-z2mx2" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.876821 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht"] Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.938412 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-apiservice-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.938462 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-webhook-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.938532 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpdz\" (UniqueName: \"kubernetes.io/projected/e5ba6b5a-524d-488a-9435-5fea2c394e6a-kube-api-access-cqpdz\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.938845 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e5ba6b5a-524d-488a-9435-5fea2c394e6a-manager-config\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:17 crc kubenswrapper[4857]: I0318 14:15:17.938955 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.040337 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqpdz\" (UniqueName: \"kubernetes.io/projected/e5ba6b5a-524d-488a-9435-5fea2c394e6a-kube-api-access-cqpdz\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.040411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e5ba6b5a-524d-488a-9435-5fea2c394e6a-manager-config\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.040440 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.040467 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-apiservice-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: 
\"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.040516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-webhook-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.041577 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/e5ba6b5a-524d-488a-9435-5fea2c394e6a-manager-config\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.046330 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-webhook-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.046590 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.051590 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e5ba6b5a-524d-488a-9435-5fea2c394e6a-apiservice-cert\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.065492 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqpdz\" (UniqueName: \"kubernetes.io/projected/e5ba6b5a-524d-488a-9435-5fea2c394e6a-kube-api-access-cqpdz\") pod \"loki-operator-controller-manager-86c8cb9b45-kxpht\" (UID: \"e5ba6b5a-524d-488a-9435-5fea2c394e6a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.170965 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:18 crc kubenswrapper[4857]: I0318 14:15:18.519332 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht"] Mar 18 14:15:19 crc kubenswrapper[4857]: I0318 14:15:19.297190 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" event={"ID":"e5ba6b5a-524d-488a-9435-5fea2c394e6a","Type":"ContainerStarted","Data":"bc93e6a03be58f9652034b7d983a42c8bdcaa04ad0a28c700b1e50d60e5d3be5"} Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.358299 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-wq7db"] Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.360835 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.367581 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-snvv8" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.367853 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.368897 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.374252 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-wq7db"] Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.634285 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh5m\" (UniqueName: \"kubernetes.io/projected/d0f7164f-530a-4171-9a18-cda5db7559c9-kube-api-access-vmh5m\") pod \"cluster-logging-operator-66689c4bbf-wq7db\" (UID: \"d0f7164f-530a-4171-9a18-cda5db7559c9\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.735782 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh5m\" (UniqueName: \"kubernetes.io/projected/d0f7164f-530a-4171-9a18-cda5db7559c9-kube-api-access-vmh5m\") pod \"cluster-logging-operator-66689c4bbf-wq7db\" (UID: \"d0f7164f-530a-4171-9a18-cda5db7559c9\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.755231 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh5m\" (UniqueName: \"kubernetes.io/projected/d0f7164f-530a-4171-9a18-cda5db7559c9-kube-api-access-vmh5m\") pod 
\"cluster-logging-operator-66689c4bbf-wq7db\" (UID: \"d0f7164f-530a-4171-9a18-cda5db7559c9\") " pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" Mar 18 14:15:21 crc kubenswrapper[4857]: I0318 14:15:21.984713 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" Mar 18 14:15:23 crc kubenswrapper[4857]: I0318 14:15:23.680181 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:23 crc kubenswrapper[4857]: I0318 14:15:23.729928 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:25 crc kubenswrapper[4857]: I0318 14:15:25.133973 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-66689c4bbf-wq7db"] Mar 18 14:15:25 crc kubenswrapper[4857]: I0318 14:15:25.354728 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" event={"ID":"d0f7164f-530a-4171-9a18-cda5db7559c9","Type":"ContainerStarted","Data":"2569092c6ec2577ee83cc42fdaa44e8f1a4f211ee25a96ef9d7ab8e2b368920c"} Mar 18 14:15:25 crc kubenswrapper[4857]: I0318 14:15:25.357228 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" event={"ID":"e5ba6b5a-524d-488a-9435-5fea2c394e6a","Type":"ContainerStarted","Data":"d7eb2ba47aff218c3dc3e64c69952ca1e5faf210df45f7e2804676102b539e6a"} Mar 18 14:15:26 crc kubenswrapper[4857]: I0318 14:15:26.658291 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:26 crc kubenswrapper[4857]: I0318 14:15:26.658854 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9pcr7" 
podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="registry-server" containerID="cri-o://32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80" gracePeriod=2 Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.276794 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.439723 4857 generic.go:334] "Generic (PLEG): container finished" podID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerID="32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80" exitCode=0 Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.439796 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerDied","Data":"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80"} Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.439855 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9pcr7" event={"ID":"2609fb9b-2cdd-4700-a2c1-888556466d3b","Type":"ContainerDied","Data":"caf923114b579bc8c377edf12c3d2b1cc507403f1ec88d3f461b08c6c4646ce8"} Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.439900 4857 scope.go:117] "RemoveContainer" containerID="32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.439964 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9pcr7" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.462692 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88zjt\" (UniqueName: \"kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt\") pod \"2609fb9b-2cdd-4700-a2c1-888556466d3b\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.462787 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities\") pod \"2609fb9b-2cdd-4700-a2c1-888556466d3b\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.462918 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content\") pod \"2609fb9b-2cdd-4700-a2c1-888556466d3b\" (UID: \"2609fb9b-2cdd-4700-a2c1-888556466d3b\") " Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.464584 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities" (OuterVolumeSpecName: "utilities") pod "2609fb9b-2cdd-4700-a2c1-888556466d3b" (UID: "2609fb9b-2cdd-4700-a2c1-888556466d3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.468276 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt" (OuterVolumeSpecName: "kube-api-access-88zjt") pod "2609fb9b-2cdd-4700-a2c1-888556466d3b" (UID: "2609fb9b-2cdd-4700-a2c1-888556466d3b"). InnerVolumeSpecName "kube-api-access-88zjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.503201 4857 scope.go:117] "RemoveContainer" containerID="90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.536025 4857 scope.go:117] "RemoveContainer" containerID="16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.553295 4857 scope.go:117] "RemoveContainer" containerID="32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80" Mar 18 14:15:27 crc kubenswrapper[4857]: E0318 14:15:27.554801 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80\": container with ID starting with 32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80 not found: ID does not exist" containerID="32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.554868 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80"} err="failed to get container status \"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80\": rpc error: code = NotFound desc = could not find container \"32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80\": container with ID starting with 32933d0ff4ef4d558891818fadc1a64aba82ed055d38c5b35f097aee9c9f5d80 not found: ID does not exist" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.554910 4857 scope.go:117] "RemoveContainer" containerID="90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565" Mar 18 14:15:27 crc kubenswrapper[4857]: E0318 14:15:27.555964 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565\": container with ID starting with 90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565 not found: ID does not exist" containerID="90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.556056 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565"} err="failed to get container status \"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565\": rpc error: code = NotFound desc = could not find container \"90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565\": container with ID starting with 90fd63663ddf9db273f3a69a3fc2a26d080a2e9665b087edee643a6284bf1565 not found: ID does not exist" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.556180 4857 scope.go:117] "RemoveContainer" containerID="16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac" Mar 18 14:15:27 crc kubenswrapper[4857]: E0318 14:15:27.556803 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac\": container with ID starting with 16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac not found: ID does not exist" containerID="16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.556847 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac"} err="failed to get container status \"16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac\": rpc error: code = NotFound desc = could not find container \"16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac\": 
container with ID starting with 16ed81da01958fac5a66bbcccee558bc0c9dea2ae0dbf754fc7ca908ef1a87ac not found: ID does not exist" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.564795 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88zjt\" (UniqueName: \"kubernetes.io/projected/2609fb9b-2cdd-4700-a2c1-888556466d3b-kube-api-access-88zjt\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.565274 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.596325 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2609fb9b-2cdd-4700-a2c1-888556466d3b" (UID: "2609fb9b-2cdd-4700-a2c1-888556466d3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.670526 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2609fb9b-2cdd-4700-a2c1-888556466d3b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.770844 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:27 crc kubenswrapper[4857]: I0318 14:15:27.776425 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9pcr7"] Mar 18 14:15:29 crc kubenswrapper[4857]: I0318 14:15:29.178804 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" path="/var/lib/kubelet/pods/2609fb9b-2cdd-4700-a2c1-888556466d3b/volumes" Mar 18 14:15:40 crc kubenswrapper[4857]: E0318 14:15:40.469222 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.20@sha256:883be225980cafa658d73b7d87ac99a39dce0fa8fb7754158ec28dc218bc903d" Mar 18 14:15:40 crc kubenswrapper[4857]: E0318 14:15:40.470064 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.20@sha256:883be225980cafa658d73b7d87ac99a39dce0fa8fb7754158ec28dc218bc903d,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --tls-cert-file=/var/run/secrets/serving-cert/tls.crt --tls-private-key-file=/var/run/secrets/serving-cert/tls.key 
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA256 --tls-min-version=VersionTLS12 --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:loki-operator.v6.2.9,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:loki-operator-metrics-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqpdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod loki-operator-controller-manager-86c8cb9b45-kxpht_openshift-operators-redhat(e5ba6b5a-524d-488a-9435-5fea2c394e6a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 18 14:15:40 crc kubenswrapper[4857]: E0318 14:15:40.471296 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" Mar 18 14:15:40 crc kubenswrapper[4857]: I0318 14:15:40.999605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" event={"ID":"d0f7164f-530a-4171-9a18-cda5db7559c9","Type":"ContainerStarted","Data":"28ef13139a4aead76738fab2fa9d75849cdc5b264fe5924d27fc4f5ee103a1c8"} Mar 18 14:15:40 crc kubenswrapper[4857]: I0318 14:15:40.999705 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:41 crc kubenswrapper[4857]: E0318 14:15:41.002316 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.20@sha256:883be225980cafa658d73b7d87ac99a39dce0fa8fb7754158ec28dc218bc903d\\\"\"" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" Mar 18 14:15:41 crc kubenswrapper[4857]: I0318 14:15:41.002695 4857 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 14:15:41 crc kubenswrapper[4857]: I0318 14:15:41.290257 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-66689c4bbf-wq7db" podStartSLOduration=5.088904785 podStartE2EDuration="20.290225655s" podCreationTimestamp="2026-03-18 14:15:21 +0000 UTC" firstStartedPulling="2026-03-18 14:15:25.152315293 +0000 UTC m=+909.281443750" lastFinishedPulling="2026-03-18 14:15:40.353636163 +0000 UTC m=+924.482764620" observedRunningTime="2026-03-18 14:15:41.28492123 +0000 UTC m=+925.414049687" watchObservedRunningTime="2026-03-18 14:15:41.290225655 +0000 UTC m=+925.419354112" Mar 18 14:15:42 crc kubenswrapper[4857]: E0318 14:15:42.012235 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.20@sha256:883be225980cafa658d73b7d87ac99a39dce0fa8fb7754158ec28dc218bc903d\\\"\"" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" Mar 18 14:15:43 crc kubenswrapper[4857]: E0318 14:15:43.128044 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.20@sha256:883be225980cafa658d73b7d87ac99a39dce0fa8fb7754158ec28dc218bc903d\\\"\"" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.150275 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564056-fsd44"] Mar 18 14:16:00 crc kubenswrapper[4857]: E0318 14:16:00.151162 4857 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="registry-server" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.151187 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="registry-server" Mar 18 14:16:00 crc kubenswrapper[4857]: E0318 14:16:00.151203 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="extract-utilities" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.151210 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="extract-utilities" Mar 18 14:16:00 crc kubenswrapper[4857]: E0318 14:16:00.151219 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="extract-content" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.151226 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="extract-content" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.151423 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2609fb9b-2cdd-4700-a2c1-888556466d3b" containerName="registry-server" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.152124 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.157110 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564056-fsd44"] Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.157330 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.157523 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.157718 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.245449 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkg2\" (UniqueName: \"kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2\") pod \"auto-csr-approver-29564056-fsd44\" (UID: \"351c3f0d-5c89-4db9-bb08-5e4853d56d69\") " pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.347428 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szkg2\" (UniqueName: \"kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2\") pod \"auto-csr-approver-29564056-fsd44\" (UID: \"351c3f0d-5c89-4db9-bb08-5e4853d56d69\") " pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.374441 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szkg2\" (UniqueName: \"kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2\") pod \"auto-csr-approver-29564056-fsd44\" (UID: \"351c3f0d-5c89-4db9-bb08-5e4853d56d69\") " 
pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:00 crc kubenswrapper[4857]: I0318 14:16:00.499349 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:01 crc kubenswrapper[4857]: I0318 14:16:01.132968 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564056-fsd44"] Mar 18 14:16:01 crc kubenswrapper[4857]: I0318 14:16:01.345437 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" event={"ID":"e5ba6b5a-524d-488a-9435-5fea2c394e6a","Type":"ContainerStarted","Data":"99d7ee880e06488eb837b36d20ac9391c9e05ab876fee92d210de751ff25da51"} Mar 18 14:16:01 crc kubenswrapper[4857]: I0318 14:16:01.346526 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564056-fsd44" event={"ID":"351c3f0d-5c89-4db9-bb08-5e4853d56d69","Type":"ContainerStarted","Data":"3dcf710ce2a857d57d3788164dde7bf3dbd4130f7ac0928e2e2f72d3ac546e1f"} Mar 18 14:16:01 crc kubenswrapper[4857]: I0318 14:16:01.373164 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podStartSLOduration=2.6711021280000002 podStartE2EDuration="44.373135674s" podCreationTimestamp="2026-03-18 14:15:17 +0000 UTC" firstStartedPulling="2026-03-18 14:15:18.55913527 +0000 UTC m=+902.688263727" lastFinishedPulling="2026-03-18 14:16:00.261168816 +0000 UTC m=+944.390297273" observedRunningTime="2026-03-18 14:16:01.372414146 +0000 UTC m=+945.501542623" watchObservedRunningTime="2026-03-18 14:16:01.373135674 +0000 UTC m=+945.502264131" Mar 18 14:16:04 crc kubenswrapper[4857]: I0318 14:16:04.485265 4857 generic.go:334] "Generic (PLEG): container finished" podID="351c3f0d-5c89-4db9-bb08-5e4853d56d69" 
containerID="28a8ce284a6e2d853c48ff4e2861d50443387eb7b17f6b4a6d65f5365c1c57ad" exitCode=0 Mar 18 14:16:04 crc kubenswrapper[4857]: I0318 14:16:04.485902 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564056-fsd44" event={"ID":"351c3f0d-5c89-4db9-bb08-5e4853d56d69","Type":"ContainerDied","Data":"28a8ce284a6e2d853c48ff4e2861d50443387eb7b17f6b4a6d65f5365c1c57ad"} Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.155254 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.156314 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.159029 4857 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-8s8l8" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.159443 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.159452 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.173287 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.398259 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbhxk\" (UniqueName: \"kubernetes.io/projected/2d5d247a-3432-4a7b-9b35-545716268c08-kube-api-access-tbhxk\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.398398 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.499706 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbhxk\" (UniqueName: \"kubernetes.io/projected/2d5d247a-3432-4a7b-9b35-545716268c08-kube-api-access-tbhxk\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.500195 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.504108 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.504166 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/efea62b300c5f50da251aefe2a5f2732d560eb1e677602a1a7bb744a1eb894de/globalmount\"" pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.529378 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbhxk\" (UniqueName: \"kubernetes.io/projected/2d5d247a-3432-4a7b-9b35-545716268c08-kube-api-access-tbhxk\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.571239 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97a286de-a68a-4ba0-a9b7-3f53d9604335\") pod \"minio\" (UID: \"2d5d247a-3432-4a7b-9b35-545716268c08\") " pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.610511 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.789111 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.910042 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szkg2\" (UniqueName: \"kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2\") pod \"351c3f0d-5c89-4db9-bb08-5e4853d56d69\" (UID: \"351c3f0d-5c89-4db9-bb08-5e4853d56d69\") " Mar 18 14:16:05 crc kubenswrapper[4857]: I0318 14:16:05.914402 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2" (OuterVolumeSpecName: "kube-api-access-szkg2") pod "351c3f0d-5c89-4db9-bb08-5e4853d56d69" (UID: "351c3f0d-5c89-4db9-bb08-5e4853d56d69"). InnerVolumeSpecName "kube-api-access-szkg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.012340 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szkg2\" (UniqueName: \"kubernetes.io/projected/351c3f0d-5c89-4db9-bb08-5e4853d56d69-kube-api-access-szkg2\") on node \"crc\" DevicePath \"\"" Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.082937 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 18 14:16:06 crc kubenswrapper[4857]: W0318 14:16:06.086807 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d5d247a_3432_4a7b_9b35_545716268c08.slice/crio-4831cdb87d826ab266cdfd5382c2dc028cf20d007eb323e6d95ea87e5caada81 WatchSource:0}: Error finding container 4831cdb87d826ab266cdfd5382c2dc028cf20d007eb323e6d95ea87e5caada81: Status 404 returned error can't find the container with id 4831cdb87d826ab266cdfd5382c2dc028cf20d007eb323e6d95ea87e5caada81 Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.504097 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="minio-dev/minio" event={"ID":"2d5d247a-3432-4a7b-9b35-545716268c08","Type":"ContainerStarted","Data":"4831cdb87d826ab266cdfd5382c2dc028cf20d007eb323e6d95ea87e5caada81"} Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.506203 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564056-fsd44" event={"ID":"351c3f0d-5c89-4db9-bb08-5e4853d56d69","Type":"ContainerDied","Data":"3dcf710ce2a857d57d3788164dde7bf3dbd4130f7ac0928e2e2f72d3ac546e1f"} Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.506282 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dcf710ce2a857d57d3788164dde7bf3dbd4130f7ac0928e2e2f72d3ac546e1f" Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.506282 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564056-fsd44" Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.858899 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564050-gtkz8"] Mar 18 14:16:06 crc kubenswrapper[4857]: I0318 14:16:06.863155 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564050-gtkz8"] Mar 18 14:16:07 crc kubenswrapper[4857]: I0318 14:16:07.173525 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c30561-7c47-4b3f-a3e8-4ff8f0c486a0" path="/var/lib/kubelet/pods/94c30561-7c47-4b3f-a3e8-4ff8f0c486a0/volumes" Mar 18 14:16:11 crc kubenswrapper[4857]: I0318 14:16:11.553710 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2d5d247a-3432-4a7b-9b35-545716268c08","Type":"ContainerStarted","Data":"0547d9bdbf2528ef7b092af64c8acaa9bdbfe2960f236dbf3f08f84a0e0cb98f"} Mar 18 14:16:11 crc kubenswrapper[4857]: I0318 14:16:11.575662 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.741091786 
podStartE2EDuration="9.575633182s" podCreationTimestamp="2026-03-18 14:16:02 +0000 UTC" firstStartedPulling="2026-03-18 14:16:06.089081242 +0000 UTC m=+950.218209699" lastFinishedPulling="2026-03-18 14:16:10.923622628 +0000 UTC m=+955.052751095" observedRunningTime="2026-03-18 14:16:11.570916132 +0000 UTC m=+955.700044609" watchObservedRunningTime="2026-03-18 14:16:11.575633182 +0000 UTC m=+955.704761649" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.698101 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj"] Mar 18 14:16:18 crc kubenswrapper[4857]: E0318 14:16:18.698688 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351c3f0d-5c89-4db9-bb08-5e4853d56d69" containerName="oc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.698702 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="351c3f0d-5c89-4db9-bb08-5e4853d56d69" containerName="oc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.698863 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="351c3f0d-5c89-4db9-bb08-5e4853d56d69" containerName="oc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.699368 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.702273 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.704172 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj"] Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.706772 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.707211 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.707439 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.707687 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-rj4fr" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.780322 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.780398 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: 
\"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.780424 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.780496 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm45r\" (UniqueName: \"kubernetes.io/projected/b4256ac3-3896-4c43-8d10-ca5ac43f4991-kube-api-access-hm45r\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.780549 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-config\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.839697 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f"] Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.840627 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.843728 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.844416 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.846834 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.854003 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f"] Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.882211 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-config\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.882644 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.882728 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: 
\"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.884870 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.885095 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-config\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.885100 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm45r\" (UniqueName: \"kubernetes.io/projected/b4256ac3-3896-4c43-8d10-ca5ac43f4991-kube-api-access-hm45r\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.886696 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-ca-bundle\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.894658 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.895579 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b4256ac3-3896-4c43-8d10-ca5ac43f4991-logging-loki-distributor-http\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.916703 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm45r\" (UniqueName: \"kubernetes.io/projected/b4256ac3-3896-4c43-8d10-ca5ac43f4991-kube-api-access-hm45r\") pod \"logging-loki-distributor-9c6b6d984-xjvbj\" (UID: \"b4256ac3-3896-4c43-8d10-ca5ac43f4991\") " pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.962050 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb"] Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.963338 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.966739 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 18 14:16:18 crc kubenswrapper[4857]: I0318 14:16:18.967051 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.114413 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.114414 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138287 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138383 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sfm\" (UniqueName: \"kubernetes.io/projected/64c46410-682b-49b0-9aa2-8f223a69165b-kube-api-access-b9sfm\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138426 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: 
\"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138453 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138485 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-config\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138506 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138537 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138566 
4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcln\" (UniqueName: \"kubernetes.io/projected/366a3cfc-7c2d-4212-a16d-2415868b12ba-kube-api-access-lqcln\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138591 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.138627 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-config\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.239450 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-bl8th"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.240293 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.240391 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.240451 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-config\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.240477 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241178 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241360 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241438 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcln\" (UniqueName: \"kubernetes.io/projected/366a3cfc-7c2d-4212-a16d-2415868b12ba-kube-api-access-lqcln\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241526 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241732 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-config\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.241898 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.242196 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.252925 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-config\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.253444 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-ca-bundle\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.258474 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-http\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.260939 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-s3\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270064 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270252 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270380 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270454 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270602 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.270734 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.278204 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366a3cfc-7c2d-4212-a16d-2415868b12ba-config\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: 
\"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.242285 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9sfm\" (UniqueName: \"kubernetes.io/projected/64c46410-682b-49b0-9aa2-8f223a69165b-kube-api-access-b9sfm\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.280095 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64c46410-682b-49b0-9aa2-8f223a69165b-logging-loki-querier-grpc\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.280163 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-bl8th"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.295600 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.298863 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9sfm\" (UniqueName: \"kubernetes.io/projected/64c46410-682b-49b0-9aa2-8f223a69165b-kube-api-access-b9sfm\") pod \"logging-loki-querier-6dcbdf8bb8-jp89f\" (UID: \"64c46410-682b-49b0-9aa2-8f223a69165b\") " 
pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.305349 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/366a3cfc-7c2d-4212-a16d-2415868b12ba-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.311944 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcln\" (UniqueName: \"kubernetes.io/projected/366a3cfc-7c2d-4212-a16d-2415868b12ba-kube-api-access-lqcln\") pod \"logging-loki-query-frontend-ff66c4dc9-82dsb\" (UID: \"366a3cfc-7c2d-4212-a16d-2415868b12ba\") " pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.314818 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.316157 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.331411 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-pmszg" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.347772 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380528 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380603 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380635 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380652 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vgz5h\" (UniqueName: \"kubernetes.io/projected/206851e1-412e-4888-9635-f8eca5aa579e-kube-api-access-vgz5h\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380793 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tenants\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380842 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380861 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-rbac\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380911 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " 
pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.380949 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381011 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381044 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tenants\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381095 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-rbac\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381113 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381188 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.381234 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjjq\" (UniqueName: \"kubernetes.io/projected/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-kube-api-access-hxjjq\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.438267 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.459486 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.482898 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tenants\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.482954 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-rbac\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484299 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484325 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-rbac\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.482982 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: 
\"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484395 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484413 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484431 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjjq\" (UniqueName: \"kubernetes.io/projected/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-kube-api-access-hxjjq\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484485 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484509 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484544 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484571 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgz5h\" (UniqueName: \"kubernetes.io/projected/206851e1-412e-4888-9635-f8eca5aa579e-kube-api-access-vgz5h\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484601 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tenants\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484633 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 
14:16:19.484651 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-rbac\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484691 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484740 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.484811 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.485500 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " 
pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: E0318 14:16:19.485597 4857 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 18 14:16:19 crc kubenswrapper[4857]: E0318 14:16:19.485676 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret podName:206851e1-412e-4888-9635-f8eca5aa579e nodeName:}" failed. No retries permitted until 2026-03-18 14:16:19.985640847 +0000 UTC m=+964.114769304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret") pod "logging-loki-gateway-fc6d448bf-w5jpj" (UID: "206851e1-412e-4888-9635-f8eca5aa579e") : secret "logging-loki-gateway-http" not found Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.486574 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-lokistack-gateway\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.489774 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: E0318 14:16:19.490285 4857 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 
14:16:19.490344 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: E0318 14:16:19.490370 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret podName:9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e nodeName:}" failed. No retries permitted until 2026-03-18 14:16:19.990347657 +0000 UTC m=+964.119476114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret") pod "logging-loki-gateway-fc6d448bf-bl8th" (UID: "9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e") : secret "logging-loki-gateway-http" not found Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.490968 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.492501 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tenants\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.493447 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-rbac\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.496141 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tenants\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.496609 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.497202 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.507274 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgz5h\" (UniqueName: \"kubernetes.io/projected/206851e1-412e-4888-9635-f8eca5aa579e-kube-api-access-vgz5h\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.509921 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjjq\" (UniqueName: \"kubernetes.io/projected/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-kube-api-access-hxjjq\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.602567 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.835530 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.836546 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.841007 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.841246 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.855442 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.932164 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.933086 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.935957 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.936293 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.956596 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962486 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962606 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962680 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962717 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962789 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grddm\" (UniqueName: \"kubernetes.io/projected/8fbde296-bf61-4d05-bf29-e27b5b58c150-kube-api-access-grddm\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962854 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962911 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:19 crc kubenswrapper[4857]: I0318 14:16:19.962951 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-config\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " 
pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.064727 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-config\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.064838 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.064876 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.064914 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.064953 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ztjw\" (UniqueName: \"kubernetes.io/projected/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-kube-api-access-8ztjw\") pod 
\"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065003 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065042 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065078 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065109 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grddm\" (UniqueName: \"kubernetes.io/projected/8fbde296-bf61-4d05-bf29-e27b5b58c150-kube-api-access-grddm\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065148 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065185 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065216 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065271 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065308 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065371 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065423 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-config\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.065459 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.066962 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-config\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.067111 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.068440 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping MountDevice... Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.068482 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/910e346b188bf1c4441d39ea3857e3b97e8ce8101bed5c5614b663ff686f881a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.068845 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.069918 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.070109 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5d71117409e4900a71cb8fc0ecdc27b68ff54857cade46051ba899ffa42c3ae0/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.071832 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" 
(UniqueName: \"kubernetes.io/secret/9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-bl8th\" (UID: \"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.076918 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.077478 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8fbde296-bf61-4d05-bf29-e27b5b58c150-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.077484 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/206851e1-412e-4888-9635-f8eca5aa579e-tls-secret\") pod \"logging-loki-gateway-fc6d448bf-w5jpj\" (UID: \"206851e1-412e-4888-9635-f8eca5aa579e\") " pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.082798 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grddm\" (UniqueName: \"kubernetes.io/projected/8fbde296-bf61-4d05-bf29-e27b5b58c150-kube-api-access-grddm\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.096210 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a3a5559a-70aa-49f4-9355-10339c24eb8e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.106370 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa917a11-7323-4ac7-a103-f4cc2701a09e\") pod \"logging-loki-ingester-0\" (UID: \"8fbde296-bf61-4d05-bf29-e27b5b58c150\") " pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.159695 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.160038 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.167887 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-config\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.167973 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.168013 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ztjw\" 
(UniqueName: \"kubernetes.io/projected/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-kube-api-access-8ztjw\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.168052 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.168110 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.168132 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.168183 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.169644 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-config\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.171893 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.183527 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.184550 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.186092 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.188822 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.194192 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 18 14:16:20 crc kubenswrapper[4857]: 
I0318 14:16:20.195142 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.202422 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.222136 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ztjw\" (UniqueName: \"kubernetes.io/projected/4da2f7e2-d9d9-42ff-b7b7-a129541ecc39-kube-api-access-8ztjw\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.237589 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.237648 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb6444c5e2ebf895c47229998511a8b0f965a94c4cada2b3be0b1420100ed28a/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.482730 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.483419 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.484967 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485064 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485125 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485179 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485256 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wcj4\" (UniqueName: \"kubernetes.io/projected/5081975d-5c3d-4788-b5e1-cd21e4fa3852-kube-api-access-8wcj4\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.485337 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.493063 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb"] Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.531204 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f"] Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588023 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wcj4\" (UniqueName: \"kubernetes.io/projected/5081975d-5c3d-4788-b5e1-cd21e4fa3852-kube-api-access-8wcj4\") pod 
\"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588090 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588147 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588453 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588502 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588545 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.588580 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.592256 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-config\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.602921 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.604882 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.607800 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.607837 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a8496727a998b768f5aa66fc330ea6fbbd70d9efe7f06dc1666804f3a5f219ed/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.608139 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.615110 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/5081975d-5c3d-4788-b5e1-cd21e4fa3852-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.616075 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wcj4\" (UniqueName: \"kubernetes.io/projected/5081975d-5c3d-4788-b5e1-cd21e4fa3852-kube-api-access-8wcj4\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.646607 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7fe9e1-0f8a-4f57-b329-67b7e07a76bb\") pod \"logging-loki-compactor-0\" (UID: \"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39\") " pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.651039 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0b65898-2b59-4113-81c7-c4c3ba906e61\") pod \"logging-loki-index-gateway-0\" (UID: \"5081975d-5c3d-4788-b5e1-cd21e4fa3852\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.774455 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" event={"ID":"64c46410-682b-49b0-9aa2-8f223a69165b","Type":"ContainerStarted","Data":"012368159b19eb63796451737a29c0592c4a192fa99055d7df7d12fc40769edf"} Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.775343 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" event={"ID":"b4256ac3-3896-4c43-8d10-ca5ac43f4991","Type":"ContainerStarted","Data":"e07869965f2556e07610a800298425a6cb6575ae965179a7aea97c93473d1bbe"} Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.776194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" event={"ID":"366a3cfc-7c2d-4212-a16d-2415868b12ba","Type":"ContainerStarted","Data":"fd6b8ec8006fd7151de017e6977423659d6de47efce891b1e77c96302ab55b81"} Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.798298 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:20 crc kubenswrapper[4857]: I0318 14:16:20.953834 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.052994 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.081257 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj"] Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.098969 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-fc6d448bf-bl8th"] Mar 18 14:16:21 crc kubenswrapper[4857]: W0318 14:16:21.130427 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c9f048a_5cbb_4f1e_ac83_4ee827a48a0e.slice/crio-014eea6ac081c6cf1a5c8d340e9701ca2fd3a1a7801150899a478caba5c94f89 WatchSource:0}: Error finding container 014eea6ac081c6cf1a5c8d340e9701ca2fd3a1a7801150899a478caba5c94f89: Status 404 returned error can't find the container with id 014eea6ac081c6cf1a5c8d340e9701ca2fd3a1a7801150899a478caba5c94f89 Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.209860 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.458350 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 18 14:16:21 crc kubenswrapper[4857]: W0318 14:16:21.466396 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4da2f7e2_d9d9_42ff_b7b7_a129541ecc39.slice/crio-4dbb7986e3e4c81f3ecd6fbe147b94eb9f475665a878e24cea5430a86584d4d3 
WatchSource:0}: Error finding container 4dbb7986e3e4c81f3ecd6fbe147b94eb9f475665a878e24cea5430a86584d4d3: Status 404 returned error can't find the container with id 4dbb7986e3e4c81f3ecd6fbe147b94eb9f475665a878e24cea5430a86584d4d3 Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.788222 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" event={"ID":"206851e1-412e-4888-9635-f8eca5aa579e","Type":"ContainerStarted","Data":"5664277623f1bb80546bbd7e72e255cf1f2e9ed12c47111329ef2f926cb5103a"} Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.790128 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" event={"ID":"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e","Type":"ContainerStarted","Data":"014eea6ac081c6cf1a5c8d340e9701ca2fd3a1a7801150899a478caba5c94f89"} Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.792219 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"8fbde296-bf61-4d05-bf29-e27b5b58c150","Type":"ContainerStarted","Data":"2cca2be9396c08be9f9904124ccdae01592d081f8fd570673ed4ece9f973ed38"} Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.793554 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5081975d-5c3d-4788-b5e1-cd21e4fa3852","Type":"ContainerStarted","Data":"973c691d01359885a96c50820e39bdd631d0043544f03ad16aafc0fc9527722f"} Mar 18 14:16:21 crc kubenswrapper[4857]: I0318 14:16:21.794657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39","Type":"ContainerStarted","Data":"4dbb7986e3e4c81f3ecd6fbe147b94eb9f475665a878e24cea5430a86584d4d3"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.816489 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" event={"ID":"64c46410-682b-49b0-9aa2-8f223a69165b","Type":"ContainerStarted","Data":"e57fbebaa425c53512ee3bc7b565dc7cf1ca7887c94a9b69a9a88fbb6ccd0c05"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.817024 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.819110 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"8fbde296-bf61-4d05-bf29-e27b5b58c150","Type":"ContainerStarted","Data":"b7fb98ab705709ab40d2b4330a4dbb665bae4bb82d8c40f0525664555ecf7b93"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.819329 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.821812 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"5081975d-5c3d-4788-b5e1-cd21e4fa3852","Type":"ContainerStarted","Data":"52282b4f5bef3b59edc4d6100d905c9802277206b8d794bb73e99c4ad86e18bf"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.823560 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.849817 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podStartSLOduration=3.06863357 podStartE2EDuration="5.849795066s" podCreationTimestamp="2026-03-18 14:16:18 +0000 UTC" firstStartedPulling="2026-03-18 14:16:20.601642277 +0000 UTC m=+964.730770734" lastFinishedPulling="2026-03-18 14:16:23.382803773 +0000 UTC m=+967.511932230" observedRunningTime="2026-03-18 14:16:23.831874801 +0000 UTC m=+967.961003268" 
watchObservedRunningTime="2026-03-18 14:16:23.849795066 +0000 UTC m=+967.978923523" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.851534 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" event={"ID":"b4256ac3-3896-4c43-8d10-ca5ac43f4991","Type":"ContainerStarted","Data":"d93a03d9bff03a9b58ea936efde08d952cdf06e2e0325c316eefc448e20f93d1"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.851712 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.861943 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" event={"ID":"366a3cfc-7c2d-4212-a16d-2415868b12ba","Type":"ContainerStarted","Data":"fbd3cb309f2d0f08fdd5e93686b546ae67589419ecf9aafbe601c21318d5906e"} Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.862062 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:23 crc kubenswrapper[4857]: I0318 14:16:23.872765 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.711599043 podStartE2EDuration="4.872733557s" podCreationTimestamp="2026-03-18 14:16:19 +0000 UTC" firstStartedPulling="2026-03-18 14:16:21.221806643 +0000 UTC m=+965.350935100" lastFinishedPulling="2026-03-18 14:16:23.382941157 +0000 UTC m=+967.512069614" observedRunningTime="2026-03-18 14:16:23.855368607 +0000 UTC m=+967.984497064" watchObservedRunningTime="2026-03-18 14:16:23.872733557 +0000 UTC m=+968.001862014" Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.031715 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"4da2f7e2-d9d9-42ff-b7b7-a129541ecc39","Type":"ContainerStarted","Data":"2d08011421b4758b6134a85eea2b5ebbdd431859fa9b4779d3827995ca529a5b"} Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.032910 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.036906 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.729686803 podStartE2EDuration="6.03688169s" podCreationTimestamp="2026-03-18 14:16:18 +0000 UTC" firstStartedPulling="2026-03-18 14:16:21.07536677 +0000 UTC m=+965.204495227" lastFinishedPulling="2026-03-18 14:16:23.382561657 +0000 UTC m=+967.511690114" observedRunningTime="2026-03-18 14:16:24.033017592 +0000 UTC m=+968.162146049" watchObservedRunningTime="2026-03-18 14:16:24.03688169 +0000 UTC m=+968.166010147" Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.056386 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podStartSLOduration=2.506189557 podStartE2EDuration="6.056366984s" podCreationTimestamp="2026-03-18 14:16:18 +0000 UTC" firstStartedPulling="2026-03-18 14:16:19.770514821 +0000 UTC m=+963.899643288" lastFinishedPulling="2026-03-18 14:16:23.320692258 +0000 UTC m=+967.449820715" observedRunningTime="2026-03-18 14:16:24.055527393 +0000 UTC m=+968.184655850" watchObservedRunningTime="2026-03-18 14:16:24.056366984 +0000 UTC m=+968.185495441" Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.083414 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=4.171777663 podStartE2EDuration="6.083395159s" podCreationTimestamp="2026-03-18 14:16:18 +0000 UTC" firstStartedPulling="2026-03-18 14:16:21.469126585 +0000 UTC m=+965.598255042" 
lastFinishedPulling="2026-03-18 14:16:23.380744081 +0000 UTC m=+967.509872538" observedRunningTime="2026-03-18 14:16:24.081621664 +0000 UTC m=+968.210750121" watchObservedRunningTime="2026-03-18 14:16:24.083395159 +0000 UTC m=+968.212523616" Mar 18 14:16:24 crc kubenswrapper[4857]: I0318 14:16:24.103543 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" podStartSLOduration=2.944561353 podStartE2EDuration="6.103518279s" podCreationTimestamp="2026-03-18 14:16:18 +0000 UTC" firstStartedPulling="2026-03-18 14:16:20.253068598 +0000 UTC m=+964.382197055" lastFinishedPulling="2026-03-18 14:16:23.412025524 +0000 UTC m=+967.541153981" observedRunningTime="2026-03-18 14:16:24.099316572 +0000 UTC m=+968.228445029" watchObservedRunningTime="2026-03-18 14:16:24.103518279 +0000 UTC m=+968.232646736" Mar 18 14:16:26 crc kubenswrapper[4857]: I0318 14:16:26.052132 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" event={"ID":"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e","Type":"ContainerStarted","Data":"d28f0657522d20863286a964594544dc77493e3a45a823fc29c73d20cd6b5521"} Mar 18 14:16:26 crc kubenswrapper[4857]: I0318 14:16:26.054933 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" event={"ID":"206851e1-412e-4888-9635-f8eca5aa579e","Type":"ContainerStarted","Data":"20405c2f345a210783c94df92254f9d68a4a9c27c378dcda49315c3262bd3168"} Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.073342 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" event={"ID":"206851e1-412e-4888-9635-f8eca5aa579e","Type":"ContainerStarted","Data":"208cd90f30ceaabe38e60246b15638128c6f1ce76dc57e261eb84392327ff0ce"} Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.074284 4857 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.074313 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.086720 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.088428 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.108921 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podStartSLOduration=2.374070185 podStartE2EDuration="9.108895239s" podCreationTimestamp="2026-03-18 14:16:19 +0000 UTC" firstStartedPulling="2026-03-18 14:16:21.102690393 +0000 UTC m=+965.231818850" lastFinishedPulling="2026-03-18 14:16:27.837515437 +0000 UTC m=+971.966643904" observedRunningTime="2026-03-18 14:16:28.104997 +0000 UTC m=+972.234125457" watchObservedRunningTime="2026-03-18 14:16:28.108895239 +0000 UTC m=+972.238023696" Mar 18 14:16:28 crc kubenswrapper[4857]: I0318 14:16:28.848529 4857 scope.go:117] "RemoveContainer" containerID="c3000da080b60efe725aa03dd4a88c2301a95af190e4ee4f82ba75379c1f6764" Mar 18 14:16:31 crc kubenswrapper[4857]: I0318 14:16:31.103916 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" event={"ID":"9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e","Type":"ContainerStarted","Data":"c2ba189ccdf3d9c3f911edf83a7e8f3f439829f1f0c20b0dd905f9a73f878392"} Mar 18 14:16:31 crc kubenswrapper[4857]: I0318 14:16:31.104485 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:31 crc kubenswrapper[4857]: I0318 14:16:31.120706 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:31 crc kubenswrapper[4857]: I0318 14:16:31.141783 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podStartSLOduration=2.795812101 podStartE2EDuration="12.141735804s" podCreationTimestamp="2026-03-18 14:16:19 +0000 UTC" firstStartedPulling="2026-03-18 14:16:21.133988926 +0000 UTC m=+965.263117383" lastFinishedPulling="2026-03-18 14:16:30.479912629 +0000 UTC m=+974.609041086" observedRunningTime="2026-03-18 14:16:31.138043191 +0000 UTC m=+975.267171728" watchObservedRunningTime="2026-03-18 14:16:31.141735804 +0000 UTC m=+975.270864271" Mar 18 14:16:32 crc kubenswrapper[4857]: I0318 14:16:32.115448 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:32 crc kubenswrapper[4857]: I0318 14:16:32.134162 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" Mar 18 14:16:39 crc kubenswrapper[4857]: I0318 14:16:39.125512 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 14:16:39 crc kubenswrapper[4857]: I0318 14:16:39.447525 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 14:16:39 crc kubenswrapper[4857]: I0318 14:16:39.467981 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 14:16:40 crc kubenswrapper[4857]: I0318 14:16:40.173366 4857 patch_prober.go:28] interesting 
pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 18 14:16:40 crc kubenswrapper[4857]: I0318 14:16:40.173469 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:16:40 crc kubenswrapper[4857]: I0318 14:16:40.807654 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 18 14:16:40 crc kubenswrapper[4857]: I0318 14:16:40.963177 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 18 14:16:50 crc kubenswrapper[4857]: I0318 14:16:50.169587 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 18 14:16:50 crc kubenswrapper[4857]: I0318 14:16:50.170374 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:16:57 crc kubenswrapper[4857]: I0318 14:16:57.039932 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:16:57 crc kubenswrapper[4857]: I0318 14:16:57.041240 4857 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:17:00 crc kubenswrapper[4857]: I0318 14:17:00.170088 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 18 14:17:00 crc kubenswrapper[4857]: I0318 14:17:00.170998 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:17:10 crc kubenswrapper[4857]: I0318 14:17:10.169152 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 18 14:17:10 crc kubenswrapper[4857]: I0318 14:17:10.169525 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:17:20 crc kubenswrapper[4857]: I0318 14:17:20.166877 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 18 14:17:27 crc kubenswrapper[4857]: I0318 14:17:27.038428 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:17:27 crc kubenswrapper[4857]: I0318 14:17:27.039173 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.916505 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-fkvvk"] Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.920046 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.924373 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jlx7k" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.924608 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.924734 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.929046 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-fkvvk"] Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.936102 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.937083 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.940966 4857 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.940978 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941058 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztst\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941099 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941126 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941146 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " 
pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941208 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941289 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941312 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.941340 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc 
kubenswrapper[4857]: I0318 14:17:37.941357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:37 crc kubenswrapper[4857]: I0318 14:17:37.996345 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fkvvk"] Mar 18 14:17:38 crc kubenswrapper[4857]: E0318 14:17:38.000289 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zztst metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-fkvvk" podUID="77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.042883 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.042969 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043009 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca\") pod \"collector-fkvvk\" (UID: 
\"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043036 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043113 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043151 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztst\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043198 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: E0318 14:17:38.043211 4857 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043229 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: E0318 14:17:38.043411 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver podName:77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc nodeName:}" failed. No retries permitted until 2026-03-18 14:17:38.543347651 +0000 UTC m=+1042.672476108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver") pod "collector-fkvvk" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc") : secret "collector-syslog-receiver" not found Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043447 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043561 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043615 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: 
E0318 14:17:38.043884 4857 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.043895 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: E0318 14:17:38.043953 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics podName:77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc nodeName:}" failed. No retries permitted until 2026-03-18 14:17:38.543931456 +0000 UTC m=+1042.673059913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics") pod "collector-fkvvk" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc") : secret "collector-metrics" not found Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.044742 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.044983 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.045349 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.045636 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.058198 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.061371 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.061457 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.063134 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztst\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 
14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.552364 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.553167 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.562939 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:38 crc kubenswrapper[4857]: I0318 14:17:38.575316 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") pod \"collector-fkvvk\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " pod="openshift-logging/collector-fkvvk" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.148384 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fkvvk" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.210088 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-fkvvk" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349363 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349447 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349525 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349593 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349668 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349780 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349831 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349890 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zztst\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349953 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.349981 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.350009 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config\") pod \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\" (UID: \"77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc\") " Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 
14:17:39.350470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.350522 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir" (OuterVolumeSpecName: "datadir") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.350659 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.351417 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.352117 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config" (OuterVolumeSpecName: "config") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.354339 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token" (OuterVolumeSpecName: "sa-token") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.354543 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token" (OuterVolumeSpecName: "collector-token") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.356640 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics" (OuterVolumeSpecName: "metrics") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.357848 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.356455 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst" (OuterVolumeSpecName: "kube-api-access-zztst") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "kube-api-access-zztst". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.356498 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp" (OuterVolumeSpecName: "tmp") pod "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" (UID: "77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451830 4857 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451880 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zztst\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-kube-api-access-zztst\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451895 4857 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-metrics\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451904 4857 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-collector-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451913 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451921 4857 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-tmp\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451930 4857 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-sa-token\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451938 4857 reconciler_common.go:293] "Volume 
detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451949 4857 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451961 4857 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-datadir\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:39 crc kubenswrapper[4857]: I0318 14:17:39.451970 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.158499 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-fkvvk" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.225354 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-fkvvk"] Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.235323 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-fkvvk"] Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.248926 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-8mmd4"] Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.250420 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.253779 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.255599 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.255907 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.256144 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-jlx7k" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.257002 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-entrypoint\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.257739 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config-openshift-service-cacrt\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.257955 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258054 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258082 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-tmp\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258158 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-datadir\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258221 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwzm\" (UniqueName: \"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-kube-api-access-8rwzm\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258279 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-syslog-receiver\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258337 4857 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-logging"/"collector-syslog-receiver" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-trusted-ca\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258631 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-metrics\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.258811 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-sa-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.267133 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-8mmd4"] Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.268260 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.359911 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-entrypoint\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.359970 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config-openshift-service-cacrt\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360011 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360040 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360057 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-tmp\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360077 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-datadir\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360106 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwzm\" (UniqueName: 
\"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-kube-api-access-8rwzm\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360131 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-syslog-receiver\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360162 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-trusted-ca\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360194 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-metrics\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360215 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-sa-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.360797 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-datadir\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " 
pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.361149 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config-openshift-service-cacrt\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.361244 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-entrypoint\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.362949 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-config\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.363175 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-trusted-ca\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.365110 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-metrics\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.365274 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" 
(UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.374944 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-tmp\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.375349 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-collector-syslog-receiver\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.379238 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-sa-token\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.603452 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwzm\" (UniqueName: \"kubernetes.io/projected/5a9975f7-76d4-402f-aba1-0cd0c476aa9e-kube-api-access-8rwzm\") pod \"collector-8mmd4\" (UID: \"5a9975f7-76d4-402f-aba1-0cd0c476aa9e\") " pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.615305 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-8mmd4" Mar 18 14:17:40 crc kubenswrapper[4857]: I0318 14:17:40.874140 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-8mmd4"] Mar 18 14:17:40 crc kubenswrapper[4857]: W0318 14:17:40.885939 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a9975f7_76d4_402f_aba1_0cd0c476aa9e.slice/crio-67e4b97c4d10b3fa91a387ff64d496a639907876c23595da768d9346345d8756 WatchSource:0}: Error finding container 67e4b97c4d10b3fa91a387ff64d496a639907876c23595da768d9346345d8756: Status 404 returned error can't find the container with id 67e4b97c4d10b3fa91a387ff64d496a639907876c23595da768d9346345d8756 Mar 18 14:17:41 crc kubenswrapper[4857]: I0318 14:17:41.176659 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc" path="/var/lib/kubelet/pods/77fa7b56-0212-41dd-b1c8-1bdf98cc3dfc/volumes" Mar 18 14:17:41 crc kubenswrapper[4857]: I0318 14:17:41.177245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-8mmd4" event={"ID":"5a9975f7-76d4-402f-aba1-0cd0c476aa9e","Type":"ContainerStarted","Data":"67e4b97c4d10b3fa91a387ff64d496a639907876c23595da768d9346345d8756"} Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.082802 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.085717 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.118444 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.207456 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.207614 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.207951 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.309065 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.309138 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.309193 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.309780 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.309851 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.329033 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c\") pod \"redhat-marketplace-728sq\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.426230 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:42 crc kubenswrapper[4857]: I0318 14:17:42.878308 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:43 crc kubenswrapper[4857]: I0318 14:17:43.189098 4857 generic.go:334] "Generic (PLEG): container finished" podID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerID="b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a" exitCode=0 Mar 18 14:17:43 crc kubenswrapper[4857]: I0318 14:17:43.189244 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerDied","Data":"b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a"} Mar 18 14:17:43 crc kubenswrapper[4857]: I0318 14:17:43.189475 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerStarted","Data":"8723360455ff2edf41c91f8ffd36e883e73a7496e7ba1b8d01ea959ac0648b56"} Mar 18 14:17:45 crc kubenswrapper[4857]: I0318 14:17:45.205379 4857 generic.go:334] "Generic (PLEG): container finished" podID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerID="acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258" exitCode=0 Mar 18 14:17:45 crc kubenswrapper[4857]: I0318 14:17:45.205467 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerDied","Data":"acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258"} Mar 18 14:17:47 crc kubenswrapper[4857]: I0318 14:17:47.223538 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-8mmd4" 
event={"ID":"5a9975f7-76d4-402f-aba1-0cd0c476aa9e","Type":"ContainerStarted","Data":"cd4fa4c9fff86758c782bf2d8edadb721fa53c9442bcc0ca649e4457b7be18b7"} Mar 18 14:17:47 crc kubenswrapper[4857]: I0318 14:17:47.252974 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-8mmd4" podStartSLOduration=1.496224905 podStartE2EDuration="7.252939233s" podCreationTimestamp="2026-03-18 14:17:40 +0000 UTC" firstStartedPulling="2026-03-18 14:17:40.889851187 +0000 UTC m=+1045.018979664" lastFinishedPulling="2026-03-18 14:17:46.646565535 +0000 UTC m=+1050.775693992" observedRunningTime="2026-03-18 14:17:47.248111022 +0000 UTC m=+1051.377239479" watchObservedRunningTime="2026-03-18 14:17:47.252939233 +0000 UTC m=+1051.382067690" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.235272 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerStarted","Data":"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0"} Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.258637 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-728sq" podStartSLOduration=2.333120371 podStartE2EDuration="6.258617907s" podCreationTimestamp="2026-03-18 14:17:42 +0000 UTC" firstStartedPulling="2026-03-18 14:17:43.190659033 +0000 UTC m=+1047.319787500" lastFinishedPulling="2026-03-18 14:17:47.116156579 +0000 UTC m=+1051.245285036" observedRunningTime="2026-03-18 14:17:48.256445762 +0000 UTC m=+1052.385574219" watchObservedRunningTime="2026-03-18 14:17:48.258617907 +0000 UTC m=+1052.387746364" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.664455 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.672880 4857 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.676192 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.809330 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.809477 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ncm9\" (UniqueName: \"kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.809565 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.911736 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.911840 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ncm9\" (UniqueName: \"kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.911896 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.912262 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.912336 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.933239 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ncm9\" (UniqueName: \"kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9\") pod \"community-operators-dlcpl\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:48 crc kubenswrapper[4857]: I0318 14:17:48.992209 4857 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:49 crc kubenswrapper[4857]: I0318 14:17:49.630262 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:17:50 crc kubenswrapper[4857]: I0318 14:17:50.250533 4857 generic.go:334] "Generic (PLEG): container finished" podID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerID="780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9" exitCode=0 Mar 18 14:17:50 crc kubenswrapper[4857]: I0318 14:17:50.250690 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerDied","Data":"780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9"} Mar 18 14:17:50 crc kubenswrapper[4857]: I0318 14:17:50.250893 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerStarted","Data":"df16ededc1da5a4b866fd4ef4e9403093efa710f2ccbf184ecf14d85216df6b0"} Mar 18 14:17:51 crc kubenswrapper[4857]: I0318 14:17:51.260250 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerStarted","Data":"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4"} Mar 18 14:17:52 crc kubenswrapper[4857]: I0318 14:17:52.270733 4857 generic.go:334] "Generic (PLEG): container finished" podID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerID="8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4" exitCode=0 Mar 18 14:17:52 crc kubenswrapper[4857]: I0318 14:17:52.270798 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" 
event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerDied","Data":"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4"} Mar 18 14:17:52 crc kubenswrapper[4857]: I0318 14:17:52.428038 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:52 crc kubenswrapper[4857]: I0318 14:17:52.428098 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:52 crc kubenswrapper[4857]: I0318 14:17:52.472626 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:53 crc kubenswrapper[4857]: I0318 14:17:53.282252 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerStarted","Data":"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4"} Mar 18 14:17:53 crc kubenswrapper[4857]: I0318 14:17:53.314996 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlcpl" podStartSLOduration=2.84763066 podStartE2EDuration="5.314972539s" podCreationTimestamp="2026-03-18 14:17:48 +0000 UTC" firstStartedPulling="2026-03-18 14:17:50.252070623 +0000 UTC m=+1054.381199080" lastFinishedPulling="2026-03-18 14:17:52.719412492 +0000 UTC m=+1056.848540959" observedRunningTime="2026-03-18 14:17:53.310555158 +0000 UTC m=+1057.439683635" watchObservedRunningTime="2026-03-18 14:17:53.314972539 +0000 UTC m=+1057.444100996" Mar 18 14:17:53 crc kubenswrapper[4857]: I0318 14:17:53.344206 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:54 crc kubenswrapper[4857]: I0318 14:17:54.925874 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:55 crc kubenswrapper[4857]: I0318 14:17:55.297635 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-728sq" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="registry-server" containerID="cri-o://f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0" gracePeriod=2 Mar 18 14:17:55 crc kubenswrapper[4857]: I0318 14:17:55.971026 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.137863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content\") pod \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.138003 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities\") pod \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.138087 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c\") pod \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\" (UID: \"f32ce874-4951-47c7-ac16-f9b3c87e1abd\") " Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.139063 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities" (OuterVolumeSpecName: "utilities") pod "f32ce874-4951-47c7-ac16-f9b3c87e1abd" (UID: 
"f32ce874-4951-47c7-ac16-f9b3c87e1abd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.139311 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.155425 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c" (OuterVolumeSpecName: "kube-api-access-gr45c") pod "f32ce874-4951-47c7-ac16-f9b3c87e1abd" (UID: "f32ce874-4951-47c7-ac16-f9b3c87e1abd"). InnerVolumeSpecName "kube-api-access-gr45c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.185573 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f32ce874-4951-47c7-ac16-f9b3c87e1abd" (UID: "f32ce874-4951-47c7-ac16-f9b3c87e1abd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.241061 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f32ce874-4951-47c7-ac16-f9b3c87e1abd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.241107 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/f32ce874-4951-47c7-ac16-f9b3c87e1abd-kube-api-access-gr45c\") on node \"crc\" DevicePath \"\"" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.309854 4857 generic.go:334] "Generic (PLEG): container finished" podID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerID="f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0" exitCode=0 Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.309947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerDied","Data":"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0"} Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.309996 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-728sq" event={"ID":"f32ce874-4951-47c7-ac16-f9b3c87e1abd","Type":"ContainerDied","Data":"8723360455ff2edf41c91f8ffd36e883e73a7496e7ba1b8d01ea959ac0648b56"} Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.310067 4857 scope.go:117] "RemoveContainer" containerID="f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.310304 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-728sq" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.335142 4857 scope.go:117] "RemoveContainer" containerID="acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.546011 4857 scope.go:117] "RemoveContainer" containerID="b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.553932 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.559888 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-728sq"] Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.596094 4857 scope.go:117] "RemoveContainer" containerID="f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0" Mar 18 14:17:56 crc kubenswrapper[4857]: E0318 14:17:56.597853 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0\": container with ID starting with f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0 not found: ID does not exist" containerID="f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.597923 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0"} err="failed to get container status \"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0\": rpc error: code = NotFound desc = could not find container \"f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0\": container with ID starting with f59c46b66475dd27a76dbc86904cc07803885c8169116e7ec68575eff5cfb1a0 not found: 
ID does not exist" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.597964 4857 scope.go:117] "RemoveContainer" containerID="acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258" Mar 18 14:17:56 crc kubenswrapper[4857]: E0318 14:17:56.598477 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258\": container with ID starting with acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258 not found: ID does not exist" containerID="acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.598528 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258"} err="failed to get container status \"acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258\": rpc error: code = NotFound desc = could not find container \"acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258\": container with ID starting with acc4704d6f1b75b6e3dadb4b6cfe0770e22d7621658ac31c1a3e49dd05026258 not found: ID does not exist" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.598564 4857 scope.go:117] "RemoveContainer" containerID="b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a" Mar 18 14:17:56 crc kubenswrapper[4857]: E0318 14:17:56.598911 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a\": container with ID starting with b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a not found: ID does not exist" containerID="b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a" Mar 18 14:17:56 crc kubenswrapper[4857]: I0318 14:17:56.598940 4857 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a"} err="failed to get container status \"b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a\": rpc error: code = NotFound desc = could not find container \"b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a\": container with ID starting with b272f874bd8eecbb30fe1c079cd9785c24c1b96e7af7d0a8831987576ba7bf0a not found: ID does not exist" Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.038966 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.039096 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.039261 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.040873 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.041029 4857 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65" gracePeriod=600 Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.182942 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" path="/var/lib/kubelet/pods/f32ce874-4951-47c7-ac16-f9b3c87e1abd/volumes" Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.324473 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65" exitCode=0 Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.324565 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65"} Mar 18 14:17:57 crc kubenswrapper[4857]: I0318 14:17:57.324644 4857 scope.go:117] "RemoveContainer" containerID="3631c04bd75be7a7fcee1c0f3130eafd7172f74de9e0ccca7d8c5f516f3e8d18" Mar 18 14:17:58 crc kubenswrapper[4857]: I0318 14:17:58.339739 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9"} Mar 18 14:17:58 crc kubenswrapper[4857]: I0318 14:17:58.992497 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:58 crc kubenswrapper[4857]: I0318 14:17:58.992990 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:59 crc kubenswrapper[4857]: I0318 14:17:59.039673 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:59 crc kubenswrapper[4857]: I0318 14:17:59.550308 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:17:59 crc kubenswrapper[4857]: I0318 14:17:59.875572 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.158875 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564058-vgn9l"] Mar 18 14:18:00 crc kubenswrapper[4857]: E0318 14:18:00.159265 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="extract-content" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.159302 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="extract-content" Mar 18 14:18:00 crc kubenswrapper[4857]: E0318 14:18:00.159319 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="registry-server" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.159327 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="registry-server" Mar 18 14:18:00 crc kubenswrapper[4857]: E0318 14:18:00.159349 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="extract-utilities" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.159357 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="extract-utilities" Mar 18 14:18:00 crc 
kubenswrapper[4857]: I0318 14:18:00.159587 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32ce874-4951-47c7-ac16-f9b3c87e1abd" containerName="registry-server" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.160286 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.163355 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.169083 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.169222 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564058-vgn9l"] Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.169350 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.307171 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqcml\" (UniqueName: \"kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml\") pod \"auto-csr-approver-29564058-vgn9l\" (UID: \"3333def7-bf08-47f6-9e48-06c0f6adb7ef\") " pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.409407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqcml\" (UniqueName: \"kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml\") pod \"auto-csr-approver-29564058-vgn9l\" (UID: \"3333def7-bf08-47f6-9e48-06c0f6adb7ef\") " pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.433861 
4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqcml\" (UniqueName: \"kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml\") pod \"auto-csr-approver-29564058-vgn9l\" (UID: \"3333def7-bf08-47f6-9e48-06c0f6adb7ef\") " pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:00 crc kubenswrapper[4857]: I0318 14:18:00.491543 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:01 crc kubenswrapper[4857]: I0318 14:18:01.074019 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564058-vgn9l"] Mar 18 14:18:01 crc kubenswrapper[4857]: W0318 14:18:01.079096 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3333def7_bf08_47f6_9e48_06c0f6adb7ef.slice/crio-e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc WatchSource:0}: Error finding container e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc: Status 404 returned error can't find the container with id e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc Mar 18 14:18:01 crc kubenswrapper[4857]: I0318 14:18:01.517902 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" event={"ID":"3333def7-bf08-47f6-9e48-06c0f6adb7ef","Type":"ContainerStarted","Data":"e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc"} Mar 18 14:18:01 crc kubenswrapper[4857]: I0318 14:18:01.518069 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlcpl" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="registry-server" containerID="cri-o://21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4" gracePeriod=2 Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 
14:18:02.022724 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.070585 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities\") pod \"73b6458e-bbff-475f-9ea9-8e14642c2670\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.070827 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ncm9\" (UniqueName: \"kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9\") pod \"73b6458e-bbff-475f-9ea9-8e14642c2670\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.070989 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content\") pod \"73b6458e-bbff-475f-9ea9-8e14642c2670\" (UID: \"73b6458e-bbff-475f-9ea9-8e14642c2670\") " Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.072831 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities" (OuterVolumeSpecName: "utilities") pod "73b6458e-bbff-475f-9ea9-8e14642c2670" (UID: "73b6458e-bbff-475f-9ea9-8e14642c2670"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.083162 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9" (OuterVolumeSpecName: "kube-api-access-2ncm9") pod "73b6458e-bbff-475f-9ea9-8e14642c2670" (UID: "73b6458e-bbff-475f-9ea9-8e14642c2670"). InnerVolumeSpecName "kube-api-access-2ncm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.172584 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.172936 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ncm9\" (UniqueName: \"kubernetes.io/projected/73b6458e-bbff-475f-9ea9-8e14642c2670-kube-api-access-2ncm9\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.302822 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b6458e-bbff-475f-9ea9-8e14642c2670" (UID: "73b6458e-bbff-475f-9ea9-8e14642c2670"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.376435 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b6458e-bbff-475f-9ea9-8e14642c2670-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.527199 4857 generic.go:334] "Generic (PLEG): container finished" podID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerID="21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4" exitCode=0 Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.527279 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerDied","Data":"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4"} Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.527312 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlcpl" event={"ID":"73b6458e-bbff-475f-9ea9-8e14642c2670","Type":"ContainerDied","Data":"df16ededc1da5a4b866fd4ef4e9403093efa710f2ccbf184ecf14d85216df6b0"} Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.527334 4857 scope.go:117] "RemoveContainer" containerID="21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.527380 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlcpl" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.530574 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" event={"ID":"3333def7-bf08-47f6-9e48-06c0f6adb7ef","Type":"ContainerStarted","Data":"abb983acc94350e27db98b6ff12909c6f384ade6476def74aa3724215ed54d39"} Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.547145 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" podStartSLOduration=1.491167049 podStartE2EDuration="2.547118219s" podCreationTimestamp="2026-03-18 14:18:00 +0000 UTC" firstStartedPulling="2026-03-18 14:18:01.081186436 +0000 UTC m=+1065.210314893" lastFinishedPulling="2026-03-18 14:18:02.137137606 +0000 UTC m=+1066.266266063" observedRunningTime="2026-03-18 14:18:02.545668943 +0000 UTC m=+1066.674797420" watchObservedRunningTime="2026-03-18 14:18:02.547118219 +0000 UTC m=+1066.676246676" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.549884 4857 scope.go:117] "RemoveContainer" containerID="8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.574078 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.581052 4857 scope.go:117] "RemoveContainer" containerID="780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.587534 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlcpl"] Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.607221 4857 scope.go:117] "RemoveContainer" containerID="21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4" Mar 18 14:18:02 crc kubenswrapper[4857]: E0318 14:18:02.607772 4857 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4\": container with ID starting with 21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4 not found: ID does not exist" containerID="21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.607818 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4"} err="failed to get container status \"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4\": rpc error: code = NotFound desc = could not find container \"21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4\": container with ID starting with 21dce21d75a0fe9406bbf5363597d7eccfc6ef715a6da5fea0e57df1aa3df2b4 not found: ID does not exist" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.607853 4857 scope.go:117] "RemoveContainer" containerID="8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4" Mar 18 14:18:02 crc kubenswrapper[4857]: E0318 14:18:02.608336 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4\": container with ID starting with 8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4 not found: ID does not exist" containerID="8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.608378 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4"} err="failed to get container status \"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4\": rpc error: code = NotFound 
desc = could not find container \"8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4\": container with ID starting with 8144b148055936bbc2cd67b6e978553f81179a5b614d2b2de1363a64d0775ca4 not found: ID does not exist" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.608410 4857 scope.go:117] "RemoveContainer" containerID="780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9" Mar 18 14:18:02 crc kubenswrapper[4857]: E0318 14:18:02.608704 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9\": container with ID starting with 780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9 not found: ID does not exist" containerID="780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9" Mar 18 14:18:02 crc kubenswrapper[4857]: I0318 14:18:02.608741 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9"} err="failed to get container status \"780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9\": rpc error: code = NotFound desc = could not find container \"780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9\": container with ID starting with 780f0e0504a17e46bf189fcef86297198ef6adaf3754bc2b771afec1f44853c9 not found: ID does not exist" Mar 18 14:18:03 crc kubenswrapper[4857]: I0318 14:18:03.178716 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" path="/var/lib/kubelet/pods/73b6458e-bbff-475f-9ea9-8e14642c2670/volumes" Mar 18 14:18:03 crc kubenswrapper[4857]: I0318 14:18:03.539732 4857 generic.go:334] "Generic (PLEG): container finished" podID="3333def7-bf08-47f6-9e48-06c0f6adb7ef" containerID="abb983acc94350e27db98b6ff12909c6f384ade6476def74aa3724215ed54d39" exitCode=0 Mar 18 14:18:03 crc 
kubenswrapper[4857]: I0318 14:18:03.539799 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" event={"ID":"3333def7-bf08-47f6-9e48-06c0f6adb7ef","Type":"ContainerDied","Data":"abb983acc94350e27db98b6ff12909c6f384ade6476def74aa3724215ed54d39"} Mar 18 14:18:04 crc kubenswrapper[4857]: I0318 14:18:04.884573 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:04 crc kubenswrapper[4857]: I0318 14:18:04.925166 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqcml\" (UniqueName: \"kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml\") pod \"3333def7-bf08-47f6-9e48-06c0f6adb7ef\" (UID: \"3333def7-bf08-47f6-9e48-06c0f6adb7ef\") " Mar 18 14:18:04 crc kubenswrapper[4857]: I0318 14:18:04.933613 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml" (OuterVolumeSpecName: "kube-api-access-zqcml") pod "3333def7-bf08-47f6-9e48-06c0f6adb7ef" (UID: "3333def7-bf08-47f6-9e48-06c0f6adb7ef"). InnerVolumeSpecName "kube-api-access-zqcml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.026730 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqcml\" (UniqueName: \"kubernetes.io/projected/3333def7-bf08-47f6-9e48-06c0f6adb7ef-kube-api-access-zqcml\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.557590 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" event={"ID":"3333def7-bf08-47f6-9e48-06c0f6adb7ef","Type":"ContainerDied","Data":"e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc"} Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.557951 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46ea949191be849a4c314fed1f9c96d9f29aaf32bdc033c7c764c2359ace0cc" Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.557842 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564058-vgn9l" Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.607413 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564052-hpjwq"] Mar 18 14:18:05 crc kubenswrapper[4857]: I0318 14:18:05.616579 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564052-hpjwq"] Mar 18 14:18:07 crc kubenswrapper[4857]: I0318 14:18:07.180198 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29bc5ba3-9c2c-486a-b75f-ff4a4b59e231" path="/var/lib/kubelet/pods/29bc5ba3-9c2c-486a-b75f-ff4a4b59e231/volumes" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.849682 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv"] Mar 18 14:18:20 crc kubenswrapper[4857]: E0318 14:18:20.850799 4857 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="extract-content" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.850836 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="extract-content" Mar 18 14:18:20 crc kubenswrapper[4857]: E0318 14:18:20.850868 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="registry-server" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.850879 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="registry-server" Mar 18 14:18:20 crc kubenswrapper[4857]: E0318 14:18:20.850901 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3333def7-bf08-47f6-9e48-06c0f6adb7ef" containerName="oc" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.850913 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3333def7-bf08-47f6-9e48-06c0f6adb7ef" containerName="oc" Mar 18 14:18:20 crc kubenswrapper[4857]: E0318 14:18:20.850944 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="extract-utilities" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.850953 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="extract-utilities" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.851148 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3333def7-bf08-47f6-9e48-06c0f6adb7ef" containerName="oc" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.851175 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b6458e-bbff-475f-9ea9-8e14642c2670" containerName="registry-server" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.866485 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.876340 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.889723 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv"] Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.958015 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqsv\" (UniqueName: \"kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.958111 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:20 crc kubenswrapper[4857]: I0318 14:18:20.958147 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: 
I0318 14:18:21.059502 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqsv\" (UniqueName: \"kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.059577 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.059603 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.060191 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.060879 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.082605 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqsv\" (UniqueName: \"kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.202348 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:21 crc kubenswrapper[4857]: I0318 14:18:21.699491 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv"] Mar 18 14:18:22 crc kubenswrapper[4857]: I0318 14:18:22.711022 4857 generic.go:334] "Generic (PLEG): container finished" podID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerID="21510682f846979ace24d76abed510f2f51ec0fb3584f3ce58ec025fe64d6d85" exitCode=0 Mar 18 14:18:22 crc kubenswrapper[4857]: I0318 14:18:22.711215 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" event={"ID":"a1a8a67d-e6ff-4782-8f41-b2481e0b5299","Type":"ContainerDied","Data":"21510682f846979ace24d76abed510f2f51ec0fb3584f3ce58ec025fe64d6d85"} Mar 18 14:18:22 crc kubenswrapper[4857]: I0318 14:18:22.711319 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" event={"ID":"a1a8a67d-e6ff-4782-8f41-b2481e0b5299","Type":"ContainerStarted","Data":"0e6b8248993566223f56504e308097322292ac37052e3c659a89637350933061"} Mar 18 14:18:25 crc kubenswrapper[4857]: I0318 14:18:25.740443 4857 generic.go:334] "Generic (PLEG): container finished" podID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerID="c38599abe8b8172875e041be04bb2a939c5d6bfec53a990284224e2460af05df" exitCode=0 Mar 18 14:18:25 crc kubenswrapper[4857]: I0318 14:18:25.740511 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" event={"ID":"a1a8a67d-e6ff-4782-8f41-b2481e0b5299","Type":"ContainerDied","Data":"c38599abe8b8172875e041be04bb2a939c5d6bfec53a990284224e2460af05df"} Mar 18 14:18:26 crc kubenswrapper[4857]: I0318 14:18:26.750944 4857 generic.go:334] "Generic (PLEG): container finished" podID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerID="506f6cb219c90417aede75779c75cab21e7bf53bb50f02f713e025cf72f8a96a" exitCode=0 Mar 18 14:18:26 crc kubenswrapper[4857]: I0318 14:18:26.751007 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" event={"ID":"a1a8a67d-e6ff-4782-8f41-b2481e0b5299","Type":"ContainerDied","Data":"506f6cb219c90417aede75779c75cab21e7bf53bb50f02f713e025cf72f8a96a"} Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.135997 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.289955 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle\") pod \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.290020 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jqsv\" (UniqueName: \"kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv\") pod \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.290214 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util\") pod \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\" (UID: \"a1a8a67d-e6ff-4782-8f41-b2481e0b5299\") " Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.291286 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle" (OuterVolumeSpecName: "bundle") pod "a1a8a67d-e6ff-4782-8f41-b2481e0b5299" (UID: "a1a8a67d-e6ff-4782-8f41-b2481e0b5299"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.303014 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv" (OuterVolumeSpecName: "kube-api-access-6jqsv") pod "a1a8a67d-e6ff-4782-8f41-b2481e0b5299" (UID: "a1a8a67d-e6ff-4782-8f41-b2481e0b5299"). InnerVolumeSpecName "kube-api-access-6jqsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.306569 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util" (OuterVolumeSpecName: "util") pod "a1a8a67d-e6ff-4782-8f41-b2481e0b5299" (UID: "a1a8a67d-e6ff-4782-8f41-b2481e0b5299"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.392857 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-util\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.393310 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.393330 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jqsv\" (UniqueName: \"kubernetes.io/projected/a1a8a67d-e6ff-4782-8f41-b2481e0b5299-kube-api-access-6jqsv\") on node \"crc\" DevicePath \"\"" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.767944 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" event={"ID":"a1a8a67d-e6ff-4782-8f41-b2481e0b5299","Type":"ContainerDied","Data":"0e6b8248993566223f56504e308097322292ac37052e3c659a89637350933061"} Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.768026 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6b8248993566223f56504e308097322292ac37052e3c659a89637350933061" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.768053 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv" Mar 18 14:18:28 crc kubenswrapper[4857]: I0318 14:18:28.990950 4857 scope.go:117] "RemoveContainer" containerID="a8bb763f8f8a08d3e6bfdfce69b5ebb116fe2f6bf550318769c53bf6c87c9686" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.768657 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h"] Mar 18 14:18:32 crc kubenswrapper[4857]: E0318 14:18:32.769636 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="pull" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.769669 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="pull" Mar 18 14:18:32 crc kubenswrapper[4857]: E0318 14:18:32.769692 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="extract" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.769699 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="extract" Mar 18 14:18:32 crc kubenswrapper[4857]: E0318 14:18:32.769729 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="util" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.769736 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="util" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.769919 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a8a67d-e6ff-4782-8f41-b2481e0b5299" containerName="extract" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.770743 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.773510 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.773848 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rkhlq" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.774252 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.779648 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h"] Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.863500 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdl5\" (UniqueName: \"kubernetes.io/projected/ac94e571-ed34-4042-8c90-f2f582d58b5e-kube-api-access-vrdl5\") pod \"nmstate-operator-796d4cfff4-bjm7h\" (UID: \"ac94e571-ed34-4042-8c90-f2f582d58b5e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" Mar 18 14:18:32 crc kubenswrapper[4857]: I0318 14:18:32.965111 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrdl5\" (UniqueName: \"kubernetes.io/projected/ac94e571-ed34-4042-8c90-f2f582d58b5e-kube-api-access-vrdl5\") pod \"nmstate-operator-796d4cfff4-bjm7h\" (UID: \"ac94e571-ed34-4042-8c90-f2f582d58b5e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" Mar 18 14:18:33 crc kubenswrapper[4857]: I0318 14:18:33.002870 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrdl5\" (UniqueName: \"kubernetes.io/projected/ac94e571-ed34-4042-8c90-f2f582d58b5e-kube-api-access-vrdl5\") pod \"nmstate-operator-796d4cfff4-bjm7h\" (UID: 
\"ac94e571-ed34-4042-8c90-f2f582d58b5e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" Mar 18 14:18:33 crc kubenswrapper[4857]: I0318 14:18:33.086103 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" Mar 18 14:18:33 crc kubenswrapper[4857]: I0318 14:18:33.365586 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h"] Mar 18 14:18:33 crc kubenswrapper[4857]: I0318 14:18:33.822504 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" event={"ID":"ac94e571-ed34-4042-8c90-f2f582d58b5e","Type":"ContainerStarted","Data":"99968cae7eea8f9346f41c644728fe109e3cc8ca323aba88384cb51d95d81acf"} Mar 18 14:18:40 crc kubenswrapper[4857]: I0318 14:18:40.064390 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" event={"ID":"ac94e571-ed34-4042-8c90-f2f582d58b5e","Type":"ContainerStarted","Data":"ea18a8002de9950beaa94208dcceba9e12321125d9fcb7467d89c5a775eac938"} Mar 18 14:18:40 crc kubenswrapper[4857]: I0318 14:18:40.093187 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bjm7h" podStartSLOduration=2.109961182 podStartE2EDuration="8.093082129s" podCreationTimestamp="2026-03-18 14:18:32 +0000 UTC" firstStartedPulling="2026-03-18 14:18:33.391022606 +0000 UTC m=+1097.520151083" lastFinishedPulling="2026-03-18 14:18:39.374143563 +0000 UTC m=+1103.503272030" observedRunningTime="2026-03-18 14:18:40.087710865 +0000 UTC m=+1104.216839332" watchObservedRunningTime="2026-03-18 14:18:40.093082129 +0000 UTC m=+1104.222210606" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.397410 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 
14:18:41.400163 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.404990 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sdc7v" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.415344 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.416526 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.421499 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.430033 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.452815 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tg9wd"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.454024 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.473820 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551264 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551333 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-nmstate-lock\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551362 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-dbus-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551417 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-ovs-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551574 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7lwpw\" (UniqueName: \"kubernetes.io/projected/45ebdaa4-576e-40b7-810d-0f4fc570125d-kube-api-access-7lwpw\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551646 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48sdv\" (UniqueName: \"kubernetes.io/projected/331d152e-70ee-44a9-8bba-7f9696545421-kube-api-access-48sdv\") pod \"nmstate-metrics-9b8c8685d-dgb87\" (UID: \"331d152e-70ee-44a9-8bba-7f9696545421\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.551684 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfm8\" (UniqueName: \"kubernetes.io/projected/3471c66b-ec38-4efc-b1ab-cbf281f8d424-kube-api-access-ljfm8\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.584648 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.586611 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.592553 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-v2nzw" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.592872 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.593027 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.603998 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.653238 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lwpw\" (UniqueName: \"kubernetes.io/projected/45ebdaa4-576e-40b7-810d-0f4fc570125d-kube-api-access-7lwpw\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.653620 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48sdv\" (UniqueName: \"kubernetes.io/projected/331d152e-70ee-44a9-8bba-7f9696545421-kube-api-access-48sdv\") pod \"nmstate-metrics-9b8c8685d-dgb87\" (UID: \"331d152e-70ee-44a9-8bba-7f9696545421\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.654343 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljfm8\" (UniqueName: \"kubernetes.io/projected/3471c66b-ec38-4efc-b1ab-cbf281f8d424-kube-api-access-ljfm8\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " 
pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.654547 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.654667 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-nmstate-lock\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.654905 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-dbus-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.655251 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-ovs-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.655518 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-ovs-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 
14:18:41.656193 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-nmstate-lock\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: E0318 14:18:41.656334 4857 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 18 14:18:41 crc kubenswrapper[4857]: E0318 14:18:41.656509 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair podName:45ebdaa4-576e-40b7-810d-0f4fc570125d nodeName:}" failed. No retries permitted until 2026-03-18 14:18:42.156477089 +0000 UTC m=+1106.285605636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair") pod "nmstate-webhook-5f558f5558-gwqfj" (UID: "45ebdaa4-576e-40b7-810d-0f4fc570125d") : secret "openshift-nmstate-webhook" not found Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.656639 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3471c66b-ec38-4efc-b1ab-cbf281f8d424-dbus-socket\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.680473 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lwpw\" (UniqueName: \"kubernetes.io/projected/45ebdaa4-576e-40b7-810d-0f4fc570125d-kube-api-access-7lwpw\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 
14:18:41.686968 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljfm8\" (UniqueName: \"kubernetes.io/projected/3471c66b-ec38-4efc-b1ab-cbf281f8d424-kube-api-access-ljfm8\") pod \"nmstate-handler-tg9wd\" (UID: \"3471c66b-ec38-4efc-b1ab-cbf281f8d424\") " pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.700376 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48sdv\" (UniqueName: \"kubernetes.io/projected/331d152e-70ee-44a9-8bba-7f9696545421-kube-api-access-48sdv\") pod \"nmstate-metrics-9b8c8685d-dgb87\" (UID: \"331d152e-70ee-44a9-8bba-7f9696545421\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.717996 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.757649 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.758021 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtq2q\" (UniqueName: \"kubernetes.io/projected/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-kube-api-access-qtq2q\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.758219 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.909095 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.913703 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.913869 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtq2q\" (UniqueName: \"kubernetes.io/projected/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-kube-api-access-qtq2q\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.914033 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.921025 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-nginx-conf\") pod 
\"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.944408 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.957041 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtq2q\" (UniqueName: \"kubernetes.io/projected/d2eb84ee-b26f-4bdf-8887-d14ffea65a41-kube-api-access-qtq2q\") pod \"nmstate-console-plugin-86f58fcf4-gsgsv\" (UID: \"d2eb84ee-b26f-4bdf-8887-d14ffea65a41\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.991703 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:18:41 crc kubenswrapper[4857]: I0318 14:18:41.993408 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:41 crc kubenswrapper[4857]: W0318 14:18:41.993718 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3471c66b_ec38_4efc_b1ab_cbf281f8d424.slice/crio-6fd2c7aeea2bdc47f5a2d171cf569ef9efb008531d03471ec8b943ddac0b4851 WatchSource:0}: Error finding container 6fd2c7aeea2bdc47f5a2d171cf569ef9efb008531d03471ec8b943ddac0b4851: Status 404 returned error can't find the container with id 6fd2c7aeea2bdc47f5a2d171cf569ef9efb008531d03471ec8b943ddac0b4851 Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:41.999341 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.030876 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssn98\" (UniqueName: \"kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.030959 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.031051 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " 
pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.031076 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.031359 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.031431 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.031533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.097478 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tg9wd" 
event={"ID":"3471c66b-ec38-4efc-b1ab-cbf281f8d424","Type":"ContainerStarted","Data":"6fd2c7aeea2bdc47f5a2d171cf569ef9efb008531d03471ec8b943ddac0b4851"} Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133132 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133219 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133263 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssn98\" (UniqueName: \"kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133285 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133314 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133331 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.133403 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.134327 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.134646 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.135014 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.135056 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.139403 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.139625 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.152973 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssn98\" (UniqueName: \"kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98\") pod \"console-6784499cd7-5vqcz\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.212681 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.235246 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.240036 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/45ebdaa4-576e-40b7-810d-0f4fc570125d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gwqfj\" (UID: \"45ebdaa4-576e-40b7-810d-0f4fc570125d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.344313 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.352408 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.397305 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87"] Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.450042 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv"] Mar 18 14:18:42 crc kubenswrapper[4857]: W0318 14:18:42.472949 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2eb84ee_b26f_4bdf_8887_d14ffea65a41.slice/crio-43248a26a8eed51fbdc5da8aeb73b5802f9bfc4440e6668866bfbb3040b6f0ba WatchSource:0}: Error finding container 43248a26a8eed51fbdc5da8aeb73b5802f9bfc4440e6668866bfbb3040b6f0ba: Status 404 returned error can't find the container with id 43248a26a8eed51fbdc5da8aeb73b5802f9bfc4440e6668866bfbb3040b6f0ba Mar 18 14:18:42 crc kubenswrapper[4857]: I0318 14:18:42.728969 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj"] Mar 18 14:18:43 crc kubenswrapper[4857]: I0318 14:18:43.109295 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:18:43 crc kubenswrapper[4857]: I0318 14:18:43.109527 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" event={"ID":"331d152e-70ee-44a9-8bba-7f9696545421","Type":"ContainerStarted","Data":"4b7b77408b335c2393d258987e6f0c1fb9b7c0f5b566e1730b31a4ad012ca240"} Mar 18 14:18:43 crc kubenswrapper[4857]: W0318 14:18:43.110155 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528a3d75_0557_4ac8_bf75_36590c9929a0.slice/crio-054cd4e75752fc5b4360c9c78026b5da07a868f1294761f9fbe38c58138e1b71 WatchSource:0}: Error finding container 
054cd4e75752fc5b4360c9c78026b5da07a868f1294761f9fbe38c58138e1b71: Status 404 returned error can't find the container with id 054cd4e75752fc5b4360c9c78026b5da07a868f1294761f9fbe38c58138e1b71 Mar 18 14:18:43 crc kubenswrapper[4857]: I0318 14:18:43.110745 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" event={"ID":"d2eb84ee-b26f-4bdf-8887-d14ffea65a41","Type":"ContainerStarted","Data":"43248a26a8eed51fbdc5da8aeb73b5802f9bfc4440e6668866bfbb3040b6f0ba"} Mar 18 14:18:43 crc kubenswrapper[4857]: I0318 14:18:43.114172 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" event={"ID":"45ebdaa4-576e-40b7-810d-0f4fc570125d","Type":"ContainerStarted","Data":"afd81fc87b1d1da9a06958361edeaf38da583def80e8d606f5f75d160cdd5743"} Mar 18 14:18:44 crc kubenswrapper[4857]: I0318 14:18:44.130654 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784499cd7-5vqcz" event={"ID":"528a3d75-0557-4ac8-bf75-36590c9929a0","Type":"ContainerStarted","Data":"271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd"} Mar 18 14:18:44 crc kubenswrapper[4857]: I0318 14:18:44.131251 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784499cd7-5vqcz" event={"ID":"528a3d75-0557-4ac8-bf75-36590c9929a0","Type":"ContainerStarted","Data":"054cd4e75752fc5b4360c9c78026b5da07a868f1294761f9fbe38c58138e1b71"} Mar 18 14:18:44 crc kubenswrapper[4857]: I0318 14:18:44.171589 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6784499cd7-5vqcz" podStartSLOduration=3.171561014 podStartE2EDuration="3.171561014s" podCreationTimestamp="2026-03-18 14:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:18:44.15819313 +0000 UTC m=+1108.287321607" 
watchObservedRunningTime="2026-03-18 14:18:44.171561014 +0000 UTC m=+1108.300689481" Mar 18 14:18:45 crc kubenswrapper[4857]: I0318 14:18:45.965274 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:18:45 crc kubenswrapper[4857]: I0318 14:18:45.967907 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:45 crc kubenswrapper[4857]: I0318 14:18:45.990311 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.139323 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.139411 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj4g\" (UniqueName: \"kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.139940 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.241881 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.242028 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.242056 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnj4g\" (UniqueName: \"kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.242392 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.242704 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.278409 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnj4g\" 
(UniqueName: \"kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g\") pod \"certified-operators-6lrhb\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:46 crc kubenswrapper[4857]: I0318 14:18:46.296079 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.221979 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.280078 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" event={"ID":"45ebdaa4-576e-40b7-810d-0f4fc570125d","Type":"ContainerStarted","Data":"4b30dacd69a35db54c1e19d21a8e1102e9b80f20d1354f7dc31b2fc67c91dc61"} Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.280288 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.285042 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" event={"ID":"331d152e-70ee-44a9-8bba-7f9696545421","Type":"ContainerStarted","Data":"b31af3d9a5a1b0fb43ad4f6252c7902dafed2ce5c4f3f1602220de5022019d3a"} Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.286615 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerStarted","Data":"c25f263d8166f740721707851eab779014aca90d3d1826787ecc2cfb9bd7621c"} Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.288322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tg9wd" 
event={"ID":"3471c66b-ec38-4efc-b1ab-cbf281f8d424","Type":"ContainerStarted","Data":"c2419139283697c6d8265715ddbcea7a08c5acbf27a4f1b72806dad875db4829"} Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.288613 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.319655 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tg9wd" podStartSLOduration=1.440561554 podStartE2EDuration="7.319633061s" podCreationTimestamp="2026-03-18 14:18:41 +0000 UTC" firstStartedPulling="2026-03-18 14:18:42.000360997 +0000 UTC m=+1106.129489464" lastFinishedPulling="2026-03-18 14:18:47.879432514 +0000 UTC m=+1112.008560971" observedRunningTime="2026-03-18 14:18:48.31719267 +0000 UTC m=+1112.446321127" watchObservedRunningTime="2026-03-18 14:18:48.319633061 +0000 UTC m=+1112.448761518" Mar 18 14:18:48 crc kubenswrapper[4857]: I0318 14:18:48.321590 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" podStartSLOduration=2.208568277 podStartE2EDuration="7.32158166s" podCreationTimestamp="2026-03-18 14:18:41 +0000 UTC" firstStartedPulling="2026-03-18 14:18:42.748935364 +0000 UTC m=+1106.878063831" lastFinishedPulling="2026-03-18 14:18:47.861948757 +0000 UTC m=+1111.991077214" observedRunningTime="2026-03-18 14:18:48.298361539 +0000 UTC m=+1112.427490006" watchObservedRunningTime="2026-03-18 14:18:48.32158166 +0000 UTC m=+1112.450710117" Mar 18 14:18:49 crc kubenswrapper[4857]: I0318 14:18:49.299634 4857 generic.go:334] "Generic (PLEG): container finished" podID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerID="48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53" exitCode=0 Mar 18 14:18:49 crc kubenswrapper[4857]: I0318 14:18:49.299732 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerDied","Data":"48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53"} Mar 18 14:18:50 crc kubenswrapper[4857]: I0318 14:18:50.318585 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerStarted","Data":"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c"} Mar 18 14:18:50 crc kubenswrapper[4857]: I0318 14:18:50.322265 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" event={"ID":"d2eb84ee-b26f-4bdf-8887-d14ffea65a41","Type":"ContainerStarted","Data":"5c2906ec4655461212669d58df0faf51555edf4851828f0dea498f399a79d120"} Mar 18 14:18:50 crc kubenswrapper[4857]: I0318 14:18:50.377904 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-gsgsv" podStartSLOduration=2.589241705 podStartE2EDuration="9.377875845s" podCreationTimestamp="2026-03-18 14:18:41 +0000 UTC" firstStartedPulling="2026-03-18 14:18:42.480973204 +0000 UTC m=+1106.610101661" lastFinishedPulling="2026-03-18 14:18:49.269607344 +0000 UTC m=+1113.398735801" observedRunningTime="2026-03-18 14:18:50.363969537 +0000 UTC m=+1114.493098014" watchObservedRunningTime="2026-03-18 14:18:50.377875845 +0000 UTC m=+1114.507004302" Mar 18 14:18:51 crc kubenswrapper[4857]: I0318 14:18:51.336526 4857 generic.go:334] "Generic (PLEG): container finished" podID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerID="67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c" exitCode=0 Mar 18 14:18:51 crc kubenswrapper[4857]: I0318 14:18:51.336626 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" 
event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerDied","Data":"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c"} Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.344966 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.345729 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.347998 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" event={"ID":"331d152e-70ee-44a9-8bba-7f9696545421","Type":"ContainerStarted","Data":"567eb1d7a38fbd3c83f0ef8753b382a0994300ffccc110e2557f25eb6e10079f"} Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.352228 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.369581 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dgb87" podStartSLOduration=2.539918212 podStartE2EDuration="11.369557324s" podCreationTimestamp="2026-03-18 14:18:41 +0000 UTC" firstStartedPulling="2026-03-18 14:18:42.401727843 +0000 UTC m=+1106.530856300" lastFinishedPulling="2026-03-18 14:18:51.231366955 +0000 UTC m=+1115.360495412" observedRunningTime="2026-03-18 14:18:52.368467186 +0000 UTC m=+1116.497595653" watchObservedRunningTime="2026-03-18 14:18:52.369557324 +0000 UTC m=+1116.498685791" Mar 18 14:18:52 crc kubenswrapper[4857]: I0318 14:18:52.440143 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6lrhb" podStartSLOduration=4.584334273 podStartE2EDuration="7.440122958s" podCreationTimestamp="2026-03-18 14:18:45 +0000 UTC" 
firstStartedPulling="2026-03-18 14:18:49.302083666 +0000 UTC m=+1113.431212123" lastFinishedPulling="2026-03-18 14:18:52.157872311 +0000 UTC m=+1116.287000808" observedRunningTime="2026-03-18 14:18:52.436860247 +0000 UTC m=+1116.565988744" watchObservedRunningTime="2026-03-18 14:18:52.440122958 +0000 UTC m=+1116.569251415" Mar 18 14:18:53 crc kubenswrapper[4857]: I0318 14:18:53.363191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerStarted","Data":"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832"} Mar 18 14:18:53 crc kubenswrapper[4857]: I0318 14:18:53.370528 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:18:53 crc kubenswrapper[4857]: I0318 14:18:53.468690 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"] Mar 18 14:18:56 crc kubenswrapper[4857]: I0318 14:18:56.297673 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:56 crc kubenswrapper[4857]: I0318 14:18:56.298360 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:56 crc kubenswrapper[4857]: I0318 14:18:56.377328 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:18:56 crc kubenswrapper[4857]: I0318 14:18:56.956478 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 14:19:02 crc kubenswrapper[4857]: I0318 14:19:02.360124 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 14:19:06 crc kubenswrapper[4857]: I0318 
14:19:06.358251 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:19:08 crc kubenswrapper[4857]: I0318 14:19:08.805094 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:19:08 crc kubenswrapper[4857]: I0318 14:19:08.806139 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6lrhb" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="registry-server" containerID="cri-o://18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832" gracePeriod=2 Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.261377 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.346879 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content\") pod \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.347117 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities\") pod \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.347411 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnj4g\" (UniqueName: \"kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g\") pod \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\" (UID: \"fb4d4c13-d707-47fe-9dbd-5b49be0b5638\") " Mar 18 14:19:09 crc kubenswrapper[4857]: 
I0318 14:19:09.348009 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities" (OuterVolumeSpecName: "utilities") pod "fb4d4c13-d707-47fe-9dbd-5b49be0b5638" (UID: "fb4d4c13-d707-47fe-9dbd-5b49be0b5638"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.373132 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g" (OuterVolumeSpecName: "kube-api-access-hnj4g") pod "fb4d4c13-d707-47fe-9dbd-5b49be0b5638" (UID: "fb4d4c13-d707-47fe-9dbd-5b49be0b5638"). InnerVolumeSpecName "kube-api-access-hnj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.410707 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb4d4c13-d707-47fe-9dbd-5b49be0b5638" (UID: "fb4d4c13-d707-47fe-9dbd-5b49be0b5638"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.450327 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnj4g\" (UniqueName: \"kubernetes.io/projected/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-kube-api-access-hnj4g\") on node \"crc\" DevicePath \"\"" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.450390 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.450411 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb4d4c13-d707-47fe-9dbd-5b49be0b5638-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.565718 4857 generic.go:334] "Generic (PLEG): container finished" podID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerID="18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832" exitCode=0 Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.566075 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerDied","Data":"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832"} Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.566283 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6lrhb" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.566416 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6lrhb" event={"ID":"fb4d4c13-d707-47fe-9dbd-5b49be0b5638","Type":"ContainerDied","Data":"c25f263d8166f740721707851eab779014aca90d3d1826787ecc2cfb9bd7621c"} Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.566562 4857 scope.go:117] "RemoveContainer" containerID="18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.614717 4857 scope.go:117] "RemoveContainer" containerID="67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.629531 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.640377 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6lrhb"] Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.650678 4857 scope.go:117] "RemoveContainer" containerID="48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.691693 4857 scope.go:117] "RemoveContainer" containerID="18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832" Mar 18 14:19:09 crc kubenswrapper[4857]: E0318 14:19:09.692394 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832\": container with ID starting with 18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832 not found: ID does not exist" containerID="18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.692459 4857 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832"} err="failed to get container status \"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832\": rpc error: code = NotFound desc = could not find container \"18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832\": container with ID starting with 18e696cd28919f18a5fc37ffbb300dbfc941acd039cc456a78907be42e5ee832 not found: ID does not exist" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.692492 4857 scope.go:117] "RemoveContainer" containerID="67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c" Mar 18 14:19:09 crc kubenswrapper[4857]: E0318 14:19:09.693082 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c\": container with ID starting with 67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c not found: ID does not exist" containerID="67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.693107 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c"} err="failed to get container status \"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c\": rpc error: code = NotFound desc = could not find container \"67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c\": container with ID starting with 67621df7de78dc19e20e7bab974a6ccb39e5d8f9777d11150435ea177bc2341c not found: ID does not exist" Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.693120 4857 scope.go:117] "RemoveContainer" containerID="48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53" Mar 18 14:19:09 crc kubenswrapper[4857]: E0318 
14:19:09.693945 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53\": container with ID starting with 48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53 not found: ID does not exist" containerID="48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53"
Mar 18 14:19:09 crc kubenswrapper[4857]: I0318 14:19:09.694109 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53"} err="failed to get container status \"48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53\": rpc error: code = NotFound desc = could not find container \"48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53\": container with ID starting with 48fd108caa11ec57408e390dd6a0f6976c863b8a759ce2156291a304f80cab53 not found: ID does not exist"
Mar 18 14:19:11 crc kubenswrapper[4857]: I0318 14:19:11.185891 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" path="/var/lib/kubelet/pods/fb4d4c13-d707-47fe-9dbd-5b49be0b5638/volumes"
Mar 18 14:19:18 crc kubenswrapper[4857]: I0318 14:19:18.562293 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-fc6d67c6b-2tvtn" podUID="ac56357b-0b65-400a-88ee-cde8cbb3194d" containerName="console" containerID="cri-o://a6b8338eef01ad8a9ec5b9697c425b8c57c7fae9edd46b9791ac996c091bbe7b" gracePeriod=15
Mar 18 14:19:18 crc kubenswrapper[4857]: I0318 14:19:18.713878 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fc6d67c6b-2tvtn_ac56357b-0b65-400a-88ee-cde8cbb3194d/console/0.log"
Mar 18 14:19:18 crc kubenswrapper[4857]: I0318 14:19:18.714154 4857 generic.go:334] "Generic (PLEG): container finished" podID="ac56357b-0b65-400a-88ee-cde8cbb3194d" containerID="a6b8338eef01ad8a9ec5b9697c425b8c57c7fae9edd46b9791ac996c091bbe7b" exitCode=2
Mar 18 14:19:18 crc kubenswrapper[4857]: I0318 14:19:18.714193 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fc6d67c6b-2tvtn" event={"ID":"ac56357b-0b65-400a-88ee-cde8cbb3194d","Type":"ContainerDied","Data":"a6b8338eef01ad8a9ec5b9697c425b8c57c7fae9edd46b9791ac996c091bbe7b"}
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.019008 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fc6d67c6b-2tvtn_ac56357b-0b65-400a-88ee-cde8cbb3194d/console/0.log"
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.019444 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fc6d67c6b-2tvtn"
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085578 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085628 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085706 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdjbm\" (UniqueName: \"kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085739 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085783 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085849 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.085899 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert\") pod \"ac56357b-0b65-400a-88ee-cde8cbb3194d\" (UID: \"ac56357b-0b65-400a-88ee-cde8cbb3194d\") "
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.087051 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config" (OuterVolumeSpecName: "console-config") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.087282 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca" (OuterVolumeSpecName: "service-ca") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.087776 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.087881 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.096625 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.096794 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm" (OuterVolumeSpecName: "kube-api-access-vdjbm") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "kube-api-access-vdjbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.097115 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ac56357b-0b65-400a-88ee-cde8cbb3194d" (UID: "ac56357b-0b65-400a-88ee-cde8cbb3194d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.192682 4857 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.193442 4857 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.193586 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdjbm\" (UniqueName: \"kubernetes.io/projected/ac56357b-0b65-400a-88ee-cde8cbb3194d-kube-api-access-vdjbm\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.193842 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.194077 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-service-ca\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.194191 4857 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ac56357b-0b65-400a-88ee-cde8cbb3194d-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.194281 4857 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac56357b-0b65-400a-88ee-cde8cbb3194d-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.728399 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fc6d67c6b-2tvtn_ac56357b-0b65-400a-88ee-cde8cbb3194d/console/0.log"
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.729015 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fc6d67c6b-2tvtn" event={"ID":"ac56357b-0b65-400a-88ee-cde8cbb3194d","Type":"ContainerDied","Data":"3785d4d45010fc221555799cddff6becd47b48467efd1c170939ed84a452b3c8"}
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.729066 4857 scope.go:117] "RemoveContainer" containerID="a6b8338eef01ad8a9ec5b9697c425b8c57c7fae9edd46b9791ac996c091bbe7b"
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.729128 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fc6d67c6b-2tvtn"
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.757048 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"]
Mar 18 14:19:19 crc kubenswrapper[4857]: I0318 14:19:19.765392 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-fc6d67c6b-2tvtn"]
Mar 18 14:19:21 crc kubenswrapper[4857]: I0318 14:19:21.176257 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac56357b-0b65-400a-88ee-cde8cbb3194d" path="/var/lib/kubelet/pods/ac56357b-0b65-400a-88ee-cde8cbb3194d/volumes"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.052413 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"]
Mar 18 14:19:23 crc kubenswrapper[4857]: E0318 14:19:23.052986 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="registry-server"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053020 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="registry-server"
Mar 18 14:19:23 crc kubenswrapper[4857]: E0318 14:19:23.053079 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="extract-utilities"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053087 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="extract-utilities"
Mar 18 14:19:23 crc kubenswrapper[4857]: E0318 14:19:23.053143 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="extract-content"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053152 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="extract-content"
Mar 18 14:19:23 crc kubenswrapper[4857]: E0318 14:19:23.053175 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac56357b-0b65-400a-88ee-cde8cbb3194d" containerName="console"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053182 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac56357b-0b65-400a-88ee-cde8cbb3194d" containerName="console"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053376 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4d4c13-d707-47fe-9dbd-5b49be0b5638" containerName="registry-server"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.053406 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac56357b-0b65-400a-88ee-cde8cbb3194d" containerName="console"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.054883 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.057778 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.069932 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"]
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.085366 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.085430 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rn9\" (UniqueName: \"kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.085529 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.187376 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rn9\" (UniqueName: \"kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.187603 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.187690 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.188275 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.188411 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.208300 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rn9\" (UniqueName: \"kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.380113 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:23 crc kubenswrapper[4857]: I0318 14:19:23.833219 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"]
Mar 18 14:19:24 crc kubenswrapper[4857]: I0318 14:19:24.774469 4857 generic.go:334] "Generic (PLEG): container finished" podID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerID="22f0c542e855e2b84225623e78c80e05d6818bf9c4a7b72b8b77bf89e2bec149" exitCode=0
Mar 18 14:19:24 crc kubenswrapper[4857]: I0318 14:19:24.774535 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z" event={"ID":"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3","Type":"ContainerDied","Data":"22f0c542e855e2b84225623e78c80e05d6818bf9c4a7b72b8b77bf89e2bec149"}
Mar 18 14:19:24 crc kubenswrapper[4857]: I0318 14:19:24.774821 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z" event={"ID":"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3","Type":"ContainerStarted","Data":"7fe882cda01d41330429226e7e28e424f18b79420e10f310352317ae9648ed19"}
Mar 18 14:19:24 crc kubenswrapper[4857]: I0318 14:19:24.776773 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 14:19:27 crc kubenswrapper[4857]: I0318 14:19:27.805910 4857 generic.go:334] "Generic (PLEG): container finished" podID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerID="32a95fb66d93c4f159b3c6602c909e7efe728bf4eee57b51416b8d7638a23627" exitCode=0
Mar 18 14:19:27 crc kubenswrapper[4857]: I0318 14:19:27.805966 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z" event={"ID":"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3","Type":"ContainerDied","Data":"32a95fb66d93c4f159b3c6602c909e7efe728bf4eee57b51416b8d7638a23627"}
Mar 18 14:19:28 crc kubenswrapper[4857]: I0318 14:19:28.819369 4857 generic.go:334] "Generic (PLEG): container finished" podID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerID="a4b386462fe7b8e5dd945708b0ab955b7747c89108736a0d9314e0c7c6eee4a0" exitCode=0
Mar 18 14:19:28 crc kubenswrapper[4857]: I0318 14:19:28.819422 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z" event={"ID":"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3","Type":"ContainerDied","Data":"a4b386462fe7b8e5dd945708b0ab955b7747c89108736a0d9314e0c7c6eee4a0"}
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.213763 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.330886 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4rn9\" (UniqueName: \"kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9\") pod \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") "
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.331014 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util\") pod \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") "
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.331102 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle\") pod \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\" (UID: \"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3\") "
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.333496 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle" (OuterVolumeSpecName: "bundle") pod "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" (UID: "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.343692 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util" (OuterVolumeSpecName: "util") pod "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" (UID: "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.349616 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9" (OuterVolumeSpecName: "kube-api-access-r4rn9") pod "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" (UID: "c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3"). InnerVolumeSpecName "kube-api-access-r4rn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.433559 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.433596 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4rn9\" (UniqueName: \"kubernetes.io/projected/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-kube-api-access-r4rn9\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.433607 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3-util\") on node \"crc\" DevicePath \"\""
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.838844 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z" event={"ID":"c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3","Type":"ContainerDied","Data":"7fe882cda01d41330429226e7e28e424f18b79420e10f310352317ae9648ed19"}
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.838932 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe882cda01d41330429226e7e28e424f18b79420e10f310352317ae9648ed19"
Mar 18 14:19:30 crc kubenswrapper[4857]: I0318 14:19:30.838975 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.415459 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"]
Mar 18 14:19:42 crc kubenswrapper[4857]: E0318 14:19:42.416682 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="pull"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.416704 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="pull"
Mar 18 14:19:42 crc kubenswrapper[4857]: E0318 14:19:42.416776 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="extract"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.416787 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="extract"
Mar 18 14:19:42 crc kubenswrapper[4857]: E0318 14:19:42.416814 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="util"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.416823 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="util"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.417006 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3" containerName="extract"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.417934 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.421044 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.421248 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.421411 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ljg9t"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.421957 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.436554 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.450847 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"]
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.554234 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-apiservice-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.554298 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxzd\" (UniqueName: \"kubernetes.io/projected/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-kube-api-access-lhxzd\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.554713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-webhook-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.656043 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-apiservice-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.656098 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhxzd\" (UniqueName: \"kubernetes.io/projected/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-kube-api-access-lhxzd\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.656202 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-webhook-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.662127 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-apiservice-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.663429 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-webhook-cert\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.675021 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxzd\" (UniqueName: \"kubernetes.io/projected/18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5-kube-api-access-lhxzd\") pod \"metallb-operator-controller-manager-7889654c4-2jp9b\" (UID: \"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5\") " pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.736192 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.782314 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"]
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.783900 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.786867 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.786977 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.788471 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n9mcw"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.800886 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"]
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.966449 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-apiservice-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.966716 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-webhook-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:42 crc kubenswrapper[4857]: I0318 14:19:42.966820 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbqw\" (UniqueName: \"kubernetes.io/projected/7ae3e1fc-2002-4805-bed1-f96339dce3a0-kube-api-access-mnbqw\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.068142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbqw\" (UniqueName: \"kubernetes.io/projected/7ae3e1fc-2002-4805-bed1-f96339dce3a0-kube-api-access-mnbqw\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.068277 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-apiservice-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.068351 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-webhook-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.074508 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-webhook-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.075029 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ae3e1fc-2002-4805-bed1-f96339dce3a0-apiservice-cert\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.090170 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbqw\" (UniqueName: \"kubernetes.io/projected/7ae3e1fc-2002-4805-bed1-f96339dce3a0-kube-api-access-mnbqw\") pod \"metallb-operator-webhook-server-55fbd9db57-wcht9\" (UID: \"7ae3e1fc-2002-4805-bed1-f96339dce3a0\") " pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.134411 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.267112 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"]
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.704583 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"]
Mar 18 14:19:43 crc kubenswrapper[4857]: W0318 14:19:43.705654 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ae3e1fc_2002_4805_bed1_f96339dce3a0.slice/crio-b6d9395fb15b14f8f3ac68098a1726a1f8a7b6335cd9bda2dda7d09e7cca8f9e WatchSource:0}: Error finding container b6d9395fb15b14f8f3ac68098a1726a1f8a7b6335cd9bda2dda7d09e7cca8f9e: Status 404 returned error can't find the container with id b6d9395fb15b14f8f3ac68098a1726a1f8a7b6335cd9bda2dda7d09e7cca8f9e
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.962914 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" event={"ID":"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5","Type":"ContainerStarted","Data":"95a4f4f5df90be86ee28b268b294dd9f1a760a782b27d9a16a0331e3cef25d19"}
Mar 18 14:19:43 crc kubenswrapper[4857]: I0318 14:19:43.964302 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" event={"ID":"7ae3e1fc-2002-4805-bed1-f96339dce3a0","Type":"ContainerStarted","Data":"b6d9395fb15b14f8f3ac68098a1726a1f8a7b6335cd9bda2dda7d09e7cca8f9e"}
Mar 18 14:19:49 crc kubenswrapper[4857]: I0318 14:19:49.020944 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" event={"ID":"7ae3e1fc-2002-4805-bed1-f96339dce3a0","Type":"ContainerStarted","Data":"c5c8064d61732ac35eeddc55f4aea20876b1b6a8841232cfb87d4bad557bf558"}
Mar 18 14:19:49 crc kubenswrapper[4857]: I0318 14:19:49.022658 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 14:19:49 crc kubenswrapper[4857]: I0318 14:19:49.045837 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podStartSLOduration=1.9892568659999998 podStartE2EDuration="7.045731485s" podCreationTimestamp="2026-03-18 14:19:42 +0000 UTC" firstStartedPulling="2026-03-18 14:19:43.708933857 +0000 UTC m=+1167.838062314" lastFinishedPulling="2026-03-18 14:19:48.765408466 +0000 UTC m=+1172.894536933" observedRunningTime="2026-03-18 14:19:49.043227203 +0000 UTC m=+1173.172355660" watchObservedRunningTime="2026-03-18 14:19:49.045731485 +0000 UTC m=+1173.174859942"
Mar 18 14:19:57 crc kubenswrapper[4857]: I0318 14:19:57.038849 4857 patch_prober.go:28] interesting 
pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:19:57 crc kubenswrapper[4857]: I0318 14:19:57.040367 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.147278 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564060-n7vkq"] Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.150235 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.153878 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.154871 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.155141 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.164968 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564060-n7vkq"] Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.275700 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6r96\" (UniqueName: 
\"kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96\") pod \"auto-csr-approver-29564060-n7vkq\" (UID: \"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f\") " pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.378567 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6r96\" (UniqueName: \"kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96\") pod \"auto-csr-approver-29564060-n7vkq\" (UID: \"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f\") " pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.400735 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6r96\" (UniqueName: \"kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96\") pod \"auto-csr-approver-29564060-n7vkq\" (UID: \"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f\") " pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:00 crc kubenswrapper[4857]: I0318 14:20:00.507637 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:01 crc kubenswrapper[4857]: I0318 14:20:01.043938 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564060-n7vkq"] Mar 18 14:20:01 crc kubenswrapper[4857]: W0318 14:20:01.055437 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c36fcd1_5e20_4a20_a924_2cc0d33e4e5f.slice/crio-51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5 WatchSource:0}: Error finding container 51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5: Status 404 returned error can't find the container with id 51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5 Mar 18 14:20:01 crc kubenswrapper[4857]: I0318 14:20:01.116908 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" event={"ID":"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f","Type":"ContainerStarted","Data":"51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5"} Mar 18 14:20:03 crc kubenswrapper[4857]: I0318 14:20:03.137523 4857 generic.go:334] "Generic (PLEG): container finished" podID="6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" containerID="62192ef9401dcbaa2a8fd786a343c4153aa36fa692f3b6781234c21bb215ecfd" exitCode=0 Mar 18 14:20:03 crc kubenswrapper[4857]: I0318 14:20:03.137781 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" event={"ID":"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f","Type":"ContainerDied","Data":"62192ef9401dcbaa2a8fd786a343c4153aa36fa692f3b6781234c21bb215ecfd"} Mar 18 14:20:03 crc kubenswrapper[4857]: I0318 14:20:03.143199 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.104491 4857 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.184233 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6r96\" (UniqueName: \"kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96\") pod \"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f\" (UID: \"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f\") " Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.211116 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96" (OuterVolumeSpecName: "kube-api-access-v6r96") pod "6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" (UID: "6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f"). InnerVolumeSpecName "kube-api-access-v6r96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.238664 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" event={"ID":"6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f","Type":"ContainerDied","Data":"51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5"} Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.238967 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ca7c216405e58f7b9d57ddad47f6654ea544a2a0f459ac88e3fda3ba5c1ab5" Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.239108 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564060-n7vkq" Mar 18 14:20:05 crc kubenswrapper[4857]: I0318 14:20:05.285770 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6r96\" (UniqueName: \"kubernetes.io/projected/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f-kube-api-access-v6r96\") on node \"crc\" DevicePath \"\"" Mar 18 14:20:06 crc kubenswrapper[4857]: I0318 14:20:06.382344 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564054-nzjjp"] Mar 18 14:20:06 crc kubenswrapper[4857]: I0318 14:20:06.388910 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564054-nzjjp"] Mar 18 14:20:07 crc kubenswrapper[4857]: I0318 14:20:07.179657 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa12b13-20a5-4a32-adf5-6ac63823cce8" path="/var/lib/kubelet/pods/dfa12b13-20a5-4a32-adf5-6ac63823cce8/volumes" Mar 18 14:20:17 crc kubenswrapper[4857]: I0318 14:20:17.481983 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" event={"ID":"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5","Type":"ContainerStarted","Data":"bf95aab027aa704d08f31e729b95d84256ac22571160b79a364ea72ca7f8906a"} Mar 18 14:20:17 crc kubenswrapper[4857]: I0318 14:20:17.482653 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" Mar 18 14:20:17 crc kubenswrapper[4857]: I0318 14:20:17.517166 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" podStartSLOduration=1.60623107 podStartE2EDuration="35.51713693s" podCreationTimestamp="2026-03-18 14:19:42 +0000 UTC" firstStartedPulling="2026-03-18 14:19:43.323535401 +0000 UTC m=+1167.452663858" lastFinishedPulling="2026-03-18 14:20:17.234441261 +0000 UTC m=+1201.363569718" 
observedRunningTime="2026-03-18 14:20:17.50674493 +0000 UTC m=+1201.635873427" watchObservedRunningTime="2026-03-18 14:20:17.51713693 +0000 UTC m=+1201.646265407" Mar 18 14:20:27 crc kubenswrapper[4857]: I0318 14:20:27.038857 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:20:27 crc kubenswrapper[4857]: I0318 14:20:27.039564 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:20:29 crc kubenswrapper[4857]: I0318 14:20:29.177279 4857 scope.go:117] "RemoveContainer" containerID="c96c979c26a0bc220c57255506b54c476e6e207d8ca5791652e18ffdf79b2241" Mar 18 14:20:52 crc kubenswrapper[4857]: I0318 14:20:52.740013 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.727769 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xtz2z"] Mar 18 14:20:53 crc kubenswrapper[4857]: E0318 14:20:53.728202 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" containerName="oc" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.728222 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" containerName="oc" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.728465 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" 
containerName="oc" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.731878 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.737258 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-khkrb" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.737463 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.737618 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.749429 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764"] Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.750740 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.759975 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.761970 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764"] Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.845878 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pm2jd"] Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.847564 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.849860 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.850310 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-g84fk" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.851166 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.851400 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871153 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871233 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metrics-certs\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871338 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82rvf\" (UniqueName: \"kubernetes.io/projected/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-kube-api-access-82rvf\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871362 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4sf\" (UniqueName: \"kubernetes.io/projected/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-kube-api-access-9c4sf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871410 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871673 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics-certs\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871734 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-startup\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.871818 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94z85\" (UniqueName: \"kubernetes.io/projected/75baf138-7643-4b4f-9919-88edd42aee95-kube-api-access-94z85\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.872103 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-reloader\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.872128 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75baf138-7643-4b4f-9919-88edd42aee95-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.872194 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metallb-excludel2\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.872268 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-sockets\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.872286 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-conf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.897337 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-fjhn2"] Mar 18 
14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.898598 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.901278 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.917295 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-fjhn2"] Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.973872 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82rvf\" (UniqueName: \"kubernetes.io/projected/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-kube-api-access-82rvf\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.973922 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c4sf\" (UniqueName: \"kubernetes.io/projected/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-kube-api-access-9c4sf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.973979 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974050 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics-certs\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 
14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974089 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974135 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-startup\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974169 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94z85\" (UniqueName: \"kubernetes.io/projected/75baf138-7643-4b4f-9919-88edd42aee95-kube-api-access-94z85\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974220 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-reloader\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974237 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75baf138-7643-4b4f-9919-88edd42aee95-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974263 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metallb-excludel2\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: E0318 14:20:53.974282 4857 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 14:20:53 crc kubenswrapper[4857]: E0318 14:20:53.974458 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist podName:a73a34ce-a354-406b-ac7a-68b7f5aaf95b nodeName:}" failed. No retries permitted until 2026-03-18 14:20:54.474372789 +0000 UTC m=+1238.603501246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist") pod "speaker-pm2jd" (UID: "a73a34ce-a354-406b-ac7a-68b7f5aaf95b") : secret "metallb-memberlist" not found Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974292 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-sockets\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974521 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6klk\" (UniqueName: \"kubernetes.io/projected/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-kube-api-access-j6klk\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974567 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-conf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974594 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-cert\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974698 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metrics-certs\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.974775 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-sockets\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.975001 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-reloader\") pod \"frr-k8s-xtz2z\" (UID: 
\"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.975669 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metallb-excludel2\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.976094 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-startup\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.976349 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-frr-conf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.976507 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.986981 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-metrics-certs\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.987074 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/75baf138-7643-4b4f-9919-88edd42aee95-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.987119 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-metrics-certs\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:53 crc kubenswrapper[4857]: I0318 14:20:53.998433 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94z85\" (UniqueName: \"kubernetes.io/projected/75baf138-7643-4b4f-9919-88edd42aee95-kube-api-access-94z85\") pod \"frr-k8s-webhook-server-bcc4b6f68-wd764\" (UID: \"75baf138-7643-4b4f-9919-88edd42aee95\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.000446 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c4sf\" (UniqueName: \"kubernetes.io/projected/30a9ec00-16b4-4349-a2c6-a2e6397e0ce0-kube-api-access-9c4sf\") pod \"frr-k8s-xtz2z\" (UID: \"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0\") " pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.001107 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82rvf\" (UniqueName: \"kubernetes.io/projected/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-kube-api-access-82rvf\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.229502 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.230282 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6klk\" (UniqueName: \"kubernetes.io/projected/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-kube-api-access-j6klk\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.230328 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-cert\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.230450 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: E0318 14:20:54.230593 4857 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 18 14:20:54 crc kubenswrapper[4857]: E0318 14:20:54.230660 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs podName:2cbcf5ed-41b1-4596-8e5d-05212018ba3b nodeName:}" failed. No retries permitted until 2026-03-18 14:20:54.730644836 +0000 UTC m=+1238.859773283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs") pod "controller-7bb4cc7c98-fjhn2" (UID: "2cbcf5ed-41b1-4596-8e5d-05212018ba3b") : secret "controller-certs-secret" not found Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.231717 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.250741 4857 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.284843 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-cert\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.310693 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6klk\" (UniqueName: \"kubernetes.io/projected/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-kube-api-access-j6klk\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.543296 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:54 crc kubenswrapper[4857]: E0318 14:20:54.543636 4857 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 14:20:54 crc kubenswrapper[4857]: E0318 14:20:54.543804 4857 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist podName:a73a34ce-a354-406b-ac7a-68b7f5aaf95b nodeName:}" failed. No retries permitted until 2026-03-18 14:20:55.543787496 +0000 UTC m=+1239.672915953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist") pod "speaker-pm2jd" (UID: "a73a34ce-a354-406b-ac7a-68b7f5aaf95b") : secret "metallb-memberlist" not found Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.809982 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.815617 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2cbcf5ed-41b1-4596-8e5d-05212018ba3b-metrics-certs\") pod \"controller-7bb4cc7c98-fjhn2\" (UID: \"2cbcf5ed-41b1-4596-8e5d-05212018ba3b\") " pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:54 crc kubenswrapper[4857]: I0318 14:20:54.853723 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764"] Mar 18 14:20:55 crc kubenswrapper[4857]: I0318 14:20:55.115441 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:55 crc kubenswrapper[4857]: I0318 14:20:55.486027 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" event={"ID":"75baf138-7643-4b4f-9919-88edd42aee95","Type":"ContainerStarted","Data":"59143fc588850324f28a513b770bfc32ecad1acc34a974404ee20ba4696c05e7"} Mar 18 14:20:55 crc kubenswrapper[4857]: I0318 14:20:55.486952 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"3868f9b36904701b111903034731335ceee29f38245f0dc80a07d57d421d42c0"} Mar 18 14:20:55 crc kubenswrapper[4857]: I0318 14:20:55.593427 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:55 crc kubenswrapper[4857]: E0318 14:20:55.593611 4857 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 14:20:55 crc kubenswrapper[4857]: E0318 14:20:55.593943 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist podName:a73a34ce-a354-406b-ac7a-68b7f5aaf95b nodeName:}" failed. No retries permitted until 2026-03-18 14:20:57.593918736 +0000 UTC m=+1241.723047203 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist") pod "speaker-pm2jd" (UID: "a73a34ce-a354-406b-ac7a-68b7f5aaf95b") : secret "metallb-memberlist" not found Mar 18 14:20:55 crc kubenswrapper[4857]: I0318 14:20:55.828642 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-fjhn2"] Mar 18 14:20:56 crc kubenswrapper[4857]: I0318 14:20:56.495645 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fjhn2" event={"ID":"2cbcf5ed-41b1-4596-8e5d-05212018ba3b","Type":"ContainerStarted","Data":"3e4458ddf859075182f4ec2a273ddaf42bcd80b8c0511dce5211ffd9075f2f11"} Mar 18 14:20:56 crc kubenswrapper[4857]: I0318 14:20:56.495970 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fjhn2" event={"ID":"2cbcf5ed-41b1-4596-8e5d-05212018ba3b","Type":"ContainerStarted","Data":"8d17f518407164929e3af5afcbd6ed062671c24c848502c41544ae9e33097549"} Mar 18 14:20:56 crc kubenswrapper[4857]: I0318 14:20:56.495981 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fjhn2" event={"ID":"2cbcf5ed-41b1-4596-8e5d-05212018ba3b","Type":"ContainerStarted","Data":"80f2118f8513e1fa36d9e3da23e74c317a251da6862b80a3b936587bbc6e04c3"} Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.060788 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.061016 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.061068 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.061835 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.061908 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9" gracePeriod=600 Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.641756 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9" exitCode=0 Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.642981 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9"} Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.643031 4857 scope.go:117] "RemoveContainer" containerID="44f0f98140eb2b3e477b163a8e6867008df3fc12c13780bd4524db7e9f4fcf65" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.643108 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.685102 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.713898 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a73a34ce-a354-406b-ac7a-68b7f5aaf95b-memberlist\") pod \"speaker-pm2jd\" (UID: \"a73a34ce-a354-406b-ac7a-68b7f5aaf95b\") " pod="metallb-system/speaker-pm2jd" Mar 18 14:20:57 crc kubenswrapper[4857]: I0318 14:20:57.763094 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pm2jd" Mar 18 14:20:58 crc kubenswrapper[4857]: I0318 14:20:58.904375 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37"} Mar 18 14:20:58 crc kubenswrapper[4857]: I0318 14:20:58.909947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm2jd" event={"ID":"a73a34ce-a354-406b-ac7a-68b7f5aaf95b","Type":"ContainerStarted","Data":"58793d32ae6d9b7e8f06111045ecc0a03ab1cb2878b6c954cc81eafecbf05ca5"} Mar 18 14:20:58 crc kubenswrapper[4857]: I0318 14:20:58.948900 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podStartSLOduration=5.948744371 podStartE2EDuration="5.948744371s" podCreationTimestamp="2026-03-18 14:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:20:57.680764804 +0000 UTC m=+1241.809893251" watchObservedRunningTime="2026-03-18 14:20:58.948744371 +0000 UTC m=+1243.077872828" Mar 18 14:20:59 crc kubenswrapper[4857]: I0318 14:20:59.947448 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm2jd" event={"ID":"a73a34ce-a354-406b-ac7a-68b7f5aaf95b","Type":"ContainerStarted","Data":"b2c3b217e7440954722382b9202cf8ef6b2433c9ac7baff10c85817686662f1b"} Mar 18 14:20:59 crc kubenswrapper[4857]: I0318 14:20:59.947736 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm2jd" event={"ID":"a73a34ce-a354-406b-ac7a-68b7f5aaf95b","Type":"ContainerStarted","Data":"c7229c6ed13d1f2a27583acc74d8303cd9e9fb3b7ece36eb15fc68145549d26d"} Mar 18 14:20:59 crc kubenswrapper[4857]: I0318 14:20:59.947792 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pm2jd" Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.061468 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" event={"ID":"75baf138-7643-4b4f-9919-88edd42aee95","Type":"ContainerStarted","Data":"87c2fe843556a160abbcb53f089f2dbdfdf5f59def26965187ed39139d2835cf"} Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.062213 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.065268 4857 generic.go:334] "Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="015c79e83358d5062a37e728efc473fc339247342a801cd44a73789e969f25be" exitCode=0 Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.065357 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" 
event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"015c79e83358d5062a37e728efc473fc339247342a801cd44a73789e969f25be"} Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.090311 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podStartSLOduration=2.909210047 podStartE2EDuration="14.090288215s" podCreationTimestamp="2026-03-18 14:20:53 +0000 UTC" firstStartedPulling="2026-03-18 14:20:54.857060849 +0000 UTC m=+1238.986189306" lastFinishedPulling="2026-03-18 14:21:06.038139017 +0000 UTC m=+1250.167267474" observedRunningTime="2026-03-18 14:21:07.081686959 +0000 UTC m=+1251.210815476" watchObservedRunningTime="2026-03-18 14:21:07.090288215 +0000 UTC m=+1251.219416672" Mar 18 14:21:07 crc kubenswrapper[4857]: I0318 14:21:07.090403 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pm2jd" podStartSLOduration=14.090398938 podStartE2EDuration="14.090398938s" podCreationTimestamp="2026-03-18 14:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:20:59.98338278 +0000 UTC m=+1244.112511237" watchObservedRunningTime="2026-03-18 14:21:07.090398938 +0000 UTC m=+1251.219527395" Mar 18 14:21:08 crc kubenswrapper[4857]: I0318 14:21:08.078633 4857 generic.go:334] "Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="0d1f67f34f8dceca091d95ad71d0f64e2398ec8a8a7799279a4f28573c25d769" exitCode=0 Mar 18 14:21:08 crc kubenswrapper[4857]: I0318 14:21:08.078800 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"0d1f67f34f8dceca091d95ad71d0f64e2398ec8a8a7799279a4f28573c25d769"} Mar 18 14:21:09 crc kubenswrapper[4857]: I0318 14:21:09.088609 4857 generic.go:334] 
"Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="8eb885806c6c667b1ca8378e249634b4dfe3e39cfb63c1b4da9ad5f85a687b7a" exitCode=0 Mar 18 14:21:09 crc kubenswrapper[4857]: I0318 14:21:09.088666 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"8eb885806c6c667b1ca8378e249634b4dfe3e39cfb63c1b4da9ad5f85a687b7a"} Mar 18 14:21:10 crc kubenswrapper[4857]: I0318 14:21:10.104619 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"3618f0b90de1e22d422adf03009ca1dbe4e711f39e937d9f6fb10b4dff8a3ed9"} Mar 18 14:21:10 crc kubenswrapper[4857]: I0318 14:21:10.105080 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349"} Mar 18 14:21:10 crc kubenswrapper[4857]: I0318 14:21:10.105096 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2"} Mar 18 14:21:11 crc kubenswrapper[4857]: I0318 14:21:11.258969 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"5c1859821fc51f298fe522c31f718f3991de2487280f4efbe360ae3007e276a5"} Mar 18 14:21:11 crc kubenswrapper[4857]: I0318 14:21:11.259314 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"93ba89632d2a33073a715208b559d17c84ed82e718ea559b8acf54d861b2cbe7"} 
Mar 18 14:21:12 crc kubenswrapper[4857]: I0318 14:21:12.278024 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"acaca431e9471bc36c47a4fd0a30692f76ac2f07e9499d7f3f4e1613ec09a653"} Mar 18 14:21:12 crc kubenswrapper[4857]: I0318 14:21:12.278304 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:21:12 crc kubenswrapper[4857]: I0318 14:21:12.317799 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xtz2z" podStartSLOduration=7.850034203 podStartE2EDuration="19.317774758s" podCreationTimestamp="2026-03-18 14:20:53 +0000 UTC" firstStartedPulling="2026-03-18 14:20:54.606278742 +0000 UTC m=+1238.735407199" lastFinishedPulling="2026-03-18 14:21:06.074019267 +0000 UTC m=+1250.203147754" observedRunningTime="2026-03-18 14:21:12.315153235 +0000 UTC m=+1256.444281692" watchObservedRunningTime="2026-03-18 14:21:12.317774758 +0000 UTC m=+1256.446903225" Mar 18 14:21:14 crc kubenswrapper[4857]: I0318 14:21:14.232804 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:21:14 crc kubenswrapper[4857]: I0318 14:21:14.289323 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:21:15 crc kubenswrapper[4857]: I0318 14:21:15.122747 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-fjhn2" Mar 18 14:21:17 crc kubenswrapper[4857]: I0318 14:21:17.771629 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pm2jd" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.236571 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xtz2z" Mar 18 14:21:24 crc 
kubenswrapper[4857]: I0318 14:21:24.252741 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.393485 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8cxcs"] Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.395720 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.399691 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.400015 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.402632 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nn75j" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.404651 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8cxcs"] Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.547365 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p84p4\" (UniqueName: \"kubernetes.io/projected/bd585d57-f586-4b7b-8c56-be04591b6bdd-kube-api-access-p84p4\") pod \"openstack-operator-index-8cxcs\" (UID: \"bd585d57-f586-4b7b-8c56-be04591b6bdd\") " pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.649470 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p84p4\" (UniqueName: \"kubernetes.io/projected/bd585d57-f586-4b7b-8c56-be04591b6bdd-kube-api-access-p84p4\") pod 
\"openstack-operator-index-8cxcs\" (UID: \"bd585d57-f586-4b7b-8c56-be04591b6bdd\") " pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.679187 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p84p4\" (UniqueName: \"kubernetes.io/projected/bd585d57-f586-4b7b-8c56-be04591b6bdd-kube-api-access-p84p4\") pod \"openstack-operator-index-8cxcs\" (UID: \"bd585d57-f586-4b7b-8c56-be04591b6bdd\") " pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:24 crc kubenswrapper[4857]: I0318 14:21:24.720377 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:25 crc kubenswrapper[4857]: I0318 14:21:25.319731 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8cxcs"] Mar 18 14:21:25 crc kubenswrapper[4857]: I0318 14:21:25.418205 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8cxcs" event={"ID":"bd585d57-f586-4b7b-8c56-be04591b6bdd","Type":"ContainerStarted","Data":"ecc768d5f0ec1c46c3e3e744e0f83afe4f0c0b65995a7790ea25589d4e1c2803"} Mar 18 14:21:28 crc kubenswrapper[4857]: I0318 14:21:28.495231 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8cxcs" podStartSLOduration=1.578852871 podStartE2EDuration="4.495193086s" podCreationTimestamp="2026-03-18 14:21:24 +0000 UTC" firstStartedPulling="2026-03-18 14:21:25.333998108 +0000 UTC m=+1269.463126585" lastFinishedPulling="2026-03-18 14:21:28.250338343 +0000 UTC m=+1272.379466800" observedRunningTime="2026-03-18 14:21:28.47867793 +0000 UTC m=+1272.607806457" watchObservedRunningTime="2026-03-18 14:21:28.495193086 +0000 UTC m=+1272.624321583" Mar 18 14:21:29 crc kubenswrapper[4857]: I0318 14:21:29.469166 4857 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/openstack-operator-index-8cxcs" event={"ID":"bd585d57-f586-4b7b-8c56-be04591b6bdd","Type":"ContainerStarted","Data":"091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762"} Mar 18 14:21:34 crc kubenswrapper[4857]: I0318 14:21:34.720753 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:34 crc kubenswrapper[4857]: I0318 14:21:34.723485 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:34 crc kubenswrapper[4857]: I0318 14:21:34.774394 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:35 crc kubenswrapper[4857]: I0318 14:21:35.666741 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 14:21:38 crc kubenswrapper[4857]: I0318 14:21:38.862574 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9"] Mar 18 14:21:38 crc kubenswrapper[4857]: I0318 14:21:38.866970 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:38 crc kubenswrapper[4857]: I0318 14:21:38.870356 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-26vv9" Mar 18 14:21:38 crc kubenswrapper[4857]: I0318 14:21:38.874136 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9"] Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.059435 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.059588 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.059641 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h78z\" (UniqueName: \"kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 
14:21:39.161495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.161754 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.161914 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h78z\" (UniqueName: \"kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.162624 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.165118 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.279006 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h78z\" (UniqueName: \"kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z\") pod \"484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:39 crc kubenswrapper[4857]: I0318 14:21:39.505865 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:40 crc kubenswrapper[4857]: I0318 14:21:40.079499 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9"] Mar 18 14:21:40 crc kubenswrapper[4857]: I0318 14:21:40.686280 4857 generic.go:334] "Generic (PLEG): container finished" podID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerID="2f50dcb14482a40b5d81b0bcebef836e309fc55bc61300f3b11110c3fd52cbb9" exitCode=0 Mar 18 14:21:40 crc kubenswrapper[4857]: I0318 14:21:40.686410 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" event={"ID":"92987a54-b377-41c3-8c50-bc86e82f41c0","Type":"ContainerDied","Data":"2f50dcb14482a40b5d81b0bcebef836e309fc55bc61300f3b11110c3fd52cbb9"} Mar 18 14:21:40 crc kubenswrapper[4857]: I0318 14:21:40.686583 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" event={"ID":"92987a54-b377-41c3-8c50-bc86e82f41c0","Type":"ContainerStarted","Data":"600ffd708762ed31444e027e2a0a6f5be9351a17d01394dbbad7e8f88dc32e28"} Mar 18 14:21:41 crc kubenswrapper[4857]: I0318 14:21:41.696587 4857 generic.go:334] "Generic (PLEG): container finished" podID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerID="dbcfdf14c4eafd6cb47db2fcc6a1e9b48373f2c444850996970f9b61074d63fb" exitCode=0 Mar 18 14:21:41 crc kubenswrapper[4857]: I0318 14:21:41.696702 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" event={"ID":"92987a54-b377-41c3-8c50-bc86e82f41c0","Type":"ContainerDied","Data":"dbcfdf14c4eafd6cb47db2fcc6a1e9b48373f2c444850996970f9b61074d63fb"} Mar 18 14:21:42 crc kubenswrapper[4857]: I0318 14:21:42.710027 4857 generic.go:334] "Generic (PLEG): container finished" podID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerID="4c37036554af34ed4c211a92156e56fce0c6d20d8febab8cf8340d38c008cd10" exitCode=0 Mar 18 14:21:42 crc kubenswrapper[4857]: I0318 14:21:42.710119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" event={"ID":"92987a54-b377-41c3-8c50-bc86e82f41c0","Type":"ContainerDied","Data":"4c37036554af34ed4c211a92156e56fce0c6d20d8febab8cf8340d38c008cd10"} Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.191219 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.376118 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h78z\" (UniqueName: \"kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z\") pod \"92987a54-b377-41c3-8c50-bc86e82f41c0\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.377595 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle\") pod \"92987a54-b377-41c3-8c50-bc86e82f41c0\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.377711 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util\") pod \"92987a54-b377-41c3-8c50-bc86e82f41c0\" (UID: \"92987a54-b377-41c3-8c50-bc86e82f41c0\") " Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.379155 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle" (OuterVolumeSpecName: "bundle") pod "92987a54-b377-41c3-8c50-bc86e82f41c0" (UID: "92987a54-b377-41c3-8c50-bc86e82f41c0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.383118 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z" (OuterVolumeSpecName: "kube-api-access-2h78z") pod "92987a54-b377-41c3-8c50-bc86e82f41c0" (UID: "92987a54-b377-41c3-8c50-bc86e82f41c0"). InnerVolumeSpecName "kube-api-access-2h78z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.397567 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util" (OuterVolumeSpecName: "util") pod "92987a54-b377-41c3-8c50-bc86e82f41c0" (UID: "92987a54-b377-41c3-8c50-bc86e82f41c0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.479525 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h78z\" (UniqueName: \"kubernetes.io/projected/92987a54-b377-41c3-8c50-bc86e82f41c0-kube-api-access-2h78z\") on node \"crc\" DevicePath \"\"" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.479568 4857 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.479581 4857 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92987a54-b377-41c3-8c50-bc86e82f41c0-util\") on node \"crc\" DevicePath \"\"" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.740995 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" event={"ID":"92987a54-b377-41c3-8c50-bc86e82f41c0","Type":"ContainerDied","Data":"600ffd708762ed31444e027e2a0a6f5be9351a17d01394dbbad7e8f88dc32e28"} Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.741114 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600ffd708762ed31444e027e2a0a6f5be9351a17d01394dbbad7e8f88dc32e28" Mar 18 14:21:44 crc kubenswrapper[4857]: I0318 14:21:44.741348 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.967320 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t"] Mar 18 14:21:51 crc kubenswrapper[4857]: E0318 14:21:51.968875 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="extract" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.968917 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="extract" Mar 18 14:21:51 crc kubenswrapper[4857]: E0318 14:21:51.968959 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="util" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.968970 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="util" Mar 18 14:21:51 crc kubenswrapper[4857]: E0318 14:21:51.968980 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="pull" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.968992 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="pull" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.969232 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="92987a54-b377-41c3-8c50-bc86e82f41c0" containerName="extract" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.970041 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:51 crc kubenswrapper[4857]: I0318 14:21:51.973393 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jmlv7" Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.000299 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t"] Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.007073 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ld55\" (UniqueName: \"kubernetes.io/projected/fdc9df02-49d3-4a40-ba9c-d6ef085abb04-kube-api-access-9ld55\") pod \"openstack-operator-controller-init-5847fcc4fb-mg28t\" (UID: \"fdc9df02-49d3-4a40-ba9c-d6ef085abb04\") " pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.108711 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ld55\" (UniqueName: \"kubernetes.io/projected/fdc9df02-49d3-4a40-ba9c-d6ef085abb04-kube-api-access-9ld55\") pod \"openstack-operator-controller-init-5847fcc4fb-mg28t\" (UID: \"fdc9df02-49d3-4a40-ba9c-d6ef085abb04\") " pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.139945 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ld55\" (UniqueName: \"kubernetes.io/projected/fdc9df02-49d3-4a40-ba9c-d6ef085abb04-kube-api-access-9ld55\") pod \"openstack-operator-controller-init-5847fcc4fb-mg28t\" (UID: \"fdc9df02-49d3-4a40-ba9c-d6ef085abb04\") " pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.290094 4857 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:52 crc kubenswrapper[4857]: I0318 14:21:52.691220 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t"] Mar 18 14:21:53 crc kubenswrapper[4857]: I0318 14:21:53.144069 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" event={"ID":"fdc9df02-49d3-4a40-ba9c-d6ef085abb04","Type":"ContainerStarted","Data":"2a4d59bf9a2632c3ff06f7700b9d1cdc70199d13640ca05111441ae943dbda99"} Mar 18 14:21:59 crc kubenswrapper[4857]: I0318 14:21:59.197850 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" event={"ID":"fdc9df02-49d3-4a40-ba9c-d6ef085abb04","Type":"ContainerStarted","Data":"bee0c79f6d5dfa80a3f7716eee333ebaabbd1f86cbf3e251968dd65a623d6623"} Mar 18 14:21:59 crc kubenswrapper[4857]: I0318 14:21:59.198643 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:21:59 crc kubenswrapper[4857]: I0318 14:21:59.238938 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podStartSLOduration=2.229658618 podStartE2EDuration="8.238878354s" podCreationTimestamp="2026-03-18 14:21:51 +0000 UTC" firstStartedPulling="2026-03-18 14:21:52.696562471 +0000 UTC m=+1296.825690928" lastFinishedPulling="2026-03-18 14:21:58.705782207 +0000 UTC m=+1302.834910664" observedRunningTime="2026-03-18 14:21:59.236650241 +0000 UTC m=+1303.365778708" watchObservedRunningTime="2026-03-18 14:21:59.238878354 +0000 UTC m=+1303.368006831" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.152298 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29564062-jlt7p"] Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.154153 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.157043 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.157343 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.158965 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.176686 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564062-jlt7p"] Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.318462 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wx4\" (UniqueName: \"kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4\") pod \"auto-csr-approver-29564062-jlt7p\" (UID: \"a5427968-6f77-45c6-9401-fec9f5409905\") " pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.421522 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wx4\" (UniqueName: \"kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4\") pod \"auto-csr-approver-29564062-jlt7p\" (UID: \"a5427968-6f77-45c6-9401-fec9f5409905\") " pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.446381 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wx4\" (UniqueName: 
\"kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4\") pod \"auto-csr-approver-29564062-jlt7p\" (UID: \"a5427968-6f77-45c6-9401-fec9f5409905\") " pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:00 crc kubenswrapper[4857]: I0318 14:22:00.474806 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:01 crc kubenswrapper[4857]: I0318 14:22:01.004999 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564062-jlt7p"] Mar 18 14:22:01 crc kubenswrapper[4857]: W0318 14:22:01.013779 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5427968_6f77_45c6_9401_fec9f5409905.slice/crio-3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88 WatchSource:0}: Error finding container 3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88: Status 404 returned error can't find the container with id 3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88 Mar 18 14:22:01 crc kubenswrapper[4857]: I0318 14:22:01.220726 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" event={"ID":"a5427968-6f77-45c6-9401-fec9f5409905","Type":"ContainerStarted","Data":"3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88"} Mar 18 14:22:03 crc kubenswrapper[4857]: I0318 14:22:03.244491 4857 generic.go:334] "Generic (PLEG): container finished" podID="a5427968-6f77-45c6-9401-fec9f5409905" containerID="95efc1e2da5135633bc35c5e3608d314bbca7554ba168575d360a4f598d51b5a" exitCode=0 Mar 18 14:22:03 crc kubenswrapper[4857]: I0318 14:22:03.244591 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" 
event={"ID":"a5427968-6f77-45c6-9401-fec9f5409905","Type":"ContainerDied","Data":"95efc1e2da5135633bc35c5e3608d314bbca7554ba168575d360a4f598d51b5a"} Mar 18 14:22:04 crc kubenswrapper[4857]: I0318 14:22:04.671995 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:04 crc kubenswrapper[4857]: I0318 14:22:04.803566 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9wx4\" (UniqueName: \"kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4\") pod \"a5427968-6f77-45c6-9401-fec9f5409905\" (UID: \"a5427968-6f77-45c6-9401-fec9f5409905\") " Mar 18 14:22:04 crc kubenswrapper[4857]: I0318 14:22:04.812817 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4" (OuterVolumeSpecName: "kube-api-access-s9wx4") pod "a5427968-6f77-45c6-9401-fec9f5409905" (UID: "a5427968-6f77-45c6-9401-fec9f5409905"). InnerVolumeSpecName "kube-api-access-s9wx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:22:04 crc kubenswrapper[4857]: I0318 14:22:04.906145 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9wx4\" (UniqueName: \"kubernetes.io/projected/a5427968-6f77-45c6-9401-fec9f5409905-kube-api-access-s9wx4\") on node \"crc\" DevicePath \"\"" Mar 18 14:22:05 crc kubenswrapper[4857]: I0318 14:22:05.267681 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" event={"ID":"a5427968-6f77-45c6-9401-fec9f5409905","Type":"ContainerDied","Data":"3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88"} Mar 18 14:22:05 crc kubenswrapper[4857]: I0318 14:22:05.267772 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fac16a1aa84efd3fb0680d242a3fd3176280a31f78f8f62043c27b8ed615a88" Mar 18 14:22:05 crc kubenswrapper[4857]: I0318 14:22:05.267915 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564062-jlt7p" Mar 18 14:22:05 crc kubenswrapper[4857]: I0318 14:22:05.749208 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564056-fsd44"] Mar 18 14:22:05 crc kubenswrapper[4857]: I0318 14:22:05.758116 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564056-fsd44"] Mar 18 14:22:07 crc kubenswrapper[4857]: I0318 14:22:07.188252 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351c3f0d-5c89-4db9-bb08-5e4853d56d69" path="/var/lib/kubelet/pods/351c3f0d-5c89-4db9-bb08-5e4853d56d69/volumes" Mar 18 14:22:12 crc kubenswrapper[4857]: I0318 14:22:12.293882 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 14:22:29 crc kubenswrapper[4857]: I0318 14:22:29.302109 4857 scope.go:117] "RemoveContainer" 
containerID="28a8ce284a6e2d853c48ff4e2861d50443387eb7b17f6b4a6d65f5365c1c57ad" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.706946 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr"] Mar 18 14:22:32 crc kubenswrapper[4857]: E0318 14:22:32.708077 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5427968-6f77-45c6-9401-fec9f5409905" containerName="oc" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.708118 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5427968-6f77-45c6-9401-fec9f5409905" containerName="oc" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.708611 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5427968-6f77-45c6-9401-fec9f5409905" containerName="oc" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.709839 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.715652 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.716726 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pp2wz" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.757619 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.761864 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c8fjv" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.781827 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.811777 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.816450 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh8rw\" (UniqueName: \"kubernetes.io/projected/b876d788-10af-45fb-95e6-37e7e127249f-kube-api-access-kh8rw\") pod \"barbican-operator-controller-manager-59bc569d95-smknr\" (UID: \"b876d788-10af-45fb-95e6-37e7e127249f\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.825065 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.826635 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.829267 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xvpbb" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.857826 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.859030 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.865618 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mcbjx" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.868028 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.877904 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.907680 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.909474 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.917999 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.918115 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh8rw\" (UniqueName: \"kubernetes.io/projected/b876d788-10af-45fb-95e6-37e7e127249f-kube-api-access-kh8rw\") pod \"barbican-operator-controller-manager-59bc569d95-smknr\" (UID: \"b876d788-10af-45fb-95e6-37e7e127249f\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.918229 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7522m\" (UniqueName: \"kubernetes.io/projected/8ffb9263-05b9-447d-a332-31f5f3312ea9-kube-api-access-7522m\") pod \"designate-operator-controller-manager-588d4d986b-ptv8b\" (UID: \"8ffb9263-05b9-447d-a332-31f5f3312ea9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.918350 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt96k\" (UniqueName: \"kubernetes.io/projected/73a9b06c-5f5c-46f7-9548-28c5a9513a95-kube-api-access-jt96k\") pod \"cinder-operator-controller-manager-8d58dc466-ltg7d\" (UID: \"73a9b06c-5f5c-46f7-9548-28c5a9513a95\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.918671 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-zv4n9" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.919135 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.926073 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5zxnq" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.926280 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.959743 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.973235 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh8rw\" (UniqueName: \"kubernetes.io/projected/b876d788-10af-45fb-95e6-37e7e127249f-kube-api-access-kh8rw\") pod \"barbican-operator-controller-manager-59bc569d95-smknr\" (UID: \"b876d788-10af-45fb-95e6-37e7e127249f\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.993928 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv"] Mar 18 14:22:32 crc kubenswrapper[4857]: I0318 14:22:32.995520 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.002823 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.008835 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5tkm7" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.019990 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d4p4\" (UniqueName: \"kubernetes.io/projected/e160f13b-785a-46a2-adb4-fa92ce7c6ab7-kube-api-access-9d4p4\") pod \"glance-operator-controller-manager-79df6bcc97-dmrdv\" (UID: \"e160f13b-785a-46a2-adb4-fa92ce7c6ab7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.020134 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7522m\" (UniqueName: \"kubernetes.io/projected/8ffb9263-05b9-447d-a332-31f5f3312ea9-kube-api-access-7522m\") pod \"designate-operator-controller-manager-588d4d986b-ptv8b\" (UID: \"8ffb9263-05b9-447d-a332-31f5f3312ea9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.020187 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhq9\" (UniqueName: \"kubernetes.io/projected/cffafd39-a112-46ab-becf-ad58facd5712-kube-api-access-drhq9\") pod \"heat-operator-controller-manager-67dd5f86f5-fvz4f\" (UID: \"cffafd39-a112-46ab-becf-ad58facd5712\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.020255 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt96k\" (UniqueName: \"kubernetes.io/projected/73a9b06c-5f5c-46f7-9548-28c5a9513a95-kube-api-access-jt96k\") pod \"cinder-operator-controller-manager-8d58dc466-ltg7d\" (UID: \"73a9b06c-5f5c-46f7-9548-28c5a9513a95\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.020296 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2h8\" (UniqueName: \"kubernetes.io/projected/01c6ffec-b474-4bfb-a282-484214bea129-kube-api-access-wn2h8\") pod \"horizon-operator-controller-manager-8464cc45fb-fqnq2\" (UID: \"01c6ffec-b474-4bfb-a282-484214bea129\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.061670 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.062807 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.064455 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt96k\" (UniqueName: \"kubernetes.io/projected/73a9b06c-5f5c-46f7-9548-28c5a9513a95-kube-api-access-jt96k\") pod \"cinder-operator-controller-manager-8d58dc466-ltg7d\" (UID: \"73a9b06c-5f5c-46f7-9548-28c5a9513a95\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.066811 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ljj2w" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.068841 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7522m\" (UniqueName: \"kubernetes.io/projected/8ffb9263-05b9-447d-a332-31f5f3312ea9-kube-api-access-7522m\") pod \"designate-operator-controller-manager-588d4d986b-ptv8b\" (UID: \"8ffb9263-05b9-447d-a332-31f5f3312ea9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.086234 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.086657 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.090991 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.105428 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.106856 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.122139 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qpdpx" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.122861 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2h8\" (UniqueName: \"kubernetes.io/projected/01c6ffec-b474-4bfb-a282-484214bea129-kube-api-access-wn2h8\") pod \"horizon-operator-controller-manager-8464cc45fb-fqnq2\" (UID: \"01c6ffec-b474-4bfb-a282-484214bea129\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.122923 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.123005 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d4p4\" (UniqueName: \"kubernetes.io/projected/e160f13b-785a-46a2-adb4-fa92ce7c6ab7-kube-api-access-9d4p4\") pod \"glance-operator-controller-manager-79df6bcc97-dmrdv\" (UID: 
\"e160f13b-785a-46a2-adb4-fa92ce7c6ab7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.123108 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhq9\" (UniqueName: \"kubernetes.io/projected/cffafd39-a112-46ab-becf-ad58facd5712-kube-api-access-drhq9\") pod \"heat-operator-controller-manager-67dd5f86f5-fvz4f\" (UID: \"cffafd39-a112-46ab-becf-ad58facd5712\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.123140 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjmr7\" (UniqueName: \"kubernetes.io/projected/2fc1a575-873e-43b1-9707-bc6247ec8bbc-kube-api-access-fjmr7\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.128787 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.154322 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.160717 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.175195 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhq9\" (UniqueName: \"kubernetes.io/projected/cffafd39-a112-46ab-becf-ad58facd5712-kube-api-access-drhq9\") pod \"heat-operator-controller-manager-67dd5f86f5-fvz4f\" (UID: \"cffafd39-a112-46ab-becf-ad58facd5712\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.190904 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.194074 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d4p4\" (UniqueName: \"kubernetes.io/projected/e160f13b-785a-46a2-adb4-fa92ce7c6ab7-kube-api-access-9d4p4\") pod \"glance-operator-controller-manager-79df6bcc97-dmrdv\" (UID: \"e160f13b-785a-46a2-adb4-fa92ce7c6ab7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.194606 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.195085 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.202703 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8hnqx" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.205456 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2h8\" (UniqueName: \"kubernetes.io/projected/01c6ffec-b474-4bfb-a282-484214bea129-kube-api-access-wn2h8\") pod \"horizon-operator-controller-manager-8464cc45fb-fqnq2\" (UID: \"01c6ffec-b474-4bfb-a282-484214bea129\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.225270 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcpw\" (UniqueName: \"kubernetes.io/projected/d567742c-e8c4-4c28-9aae-afb3527cd915-kube-api-access-8lcpw\") pod \"ironic-operator-controller-manager-6f787dddc9-kddxh\" (UID: \"d567742c-e8c4-4c28-9aae-afb3527cd915\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.225362 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjmr7\" (UniqueName: \"kubernetes.io/projected/2fc1a575-873e-43b1-9707-bc6247ec8bbc-kube-api-access-fjmr7\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.225415 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: 
\"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.225448 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8znz2\" (UniqueName: \"kubernetes.io/projected/56663366-8771-43d4-b5df-ef9b84b90a74-kube-api-access-8znz2\") pod \"keystone-operator-controller-manager-768b96df4c-xnh2t\" (UID: \"56663366-8771-43d4-b5df-ef9b84b90a74\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.227903 4857 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.228025 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert podName:2fc1a575-873e-43b1-9707-bc6247ec8bbc nodeName:}" failed. No retries permitted until 2026-03-18 14:22:33.727978337 +0000 UTC m=+1337.857106794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert") pod "infra-operator-controller-manager-7b9c774f96-xjwdv" (UID: "2fc1a575-873e-43b1-9707-bc6247ec8bbc") : secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.232801 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.245383 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.247736 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.249036 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.256022 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bzwx4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.271077 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.321735 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.331172 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.332414 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.335568 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56djv\" (UniqueName: \"kubernetes.io/projected/633285e4-04be-48d6-a496-642aa673be88-kube-api-access-56djv\") pod \"manila-operator-controller-manager-55f864c847-9m5mv\" (UID: \"633285e4-04be-48d6-a496-642aa673be88\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.335649 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8znz2\" (UniqueName: \"kubernetes.io/projected/56663366-8771-43d4-b5df-ef9b84b90a74-kube-api-access-8znz2\") pod \"keystone-operator-controller-manager-768b96df4c-xnh2t\" (UID: \"56663366-8771-43d4-b5df-ef9b84b90a74\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.335857 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5hx\" (UniqueName: \"kubernetes.io/projected/f86c8f25-0e6c-4911-87f8-7ff89a25a040-kube-api-access-mx5hx\") pod \"mariadb-operator-controller-manager-67ccfc9778-l4h6z\" (UID: \"f86c8f25-0e6c-4911-87f8-7ff89a25a040\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.335943 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lcpw\" (UniqueName: \"kubernetes.io/projected/d567742c-e8c4-4c28-9aae-afb3527cd915-kube-api-access-8lcpw\") pod \"ironic-operator-controller-manager-6f787dddc9-kddxh\" (UID: \"d567742c-e8c4-4c28-9aae-afb3527cd915\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:22:33 
crc kubenswrapper[4857]: I0318 14:22:33.348068 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjmr7\" (UniqueName: \"kubernetes.io/projected/2fc1a575-873e-43b1-9707-bc6247ec8bbc-kube-api-access-fjmr7\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.365297 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p5lzk" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.366459 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.371062 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.376992 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.459802 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpfl\" (UniqueName: \"kubernetes.io/projected/2d1893e2-6251-42ef-82d7-529e1f27ec4c-kube-api-access-4vpfl\") pod \"neutron-operator-controller-manager-767865f676-v6rv8\" (UID: \"2d1893e2-6251-42ef-82d7-529e1f27ec4c\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.460283 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56djv\" (UniqueName: \"kubernetes.io/projected/633285e4-04be-48d6-a496-642aa673be88-kube-api-access-56djv\") pod 
\"manila-operator-controller-manager-55f864c847-9m5mv\" (UID: \"633285e4-04be-48d6-a496-642aa673be88\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.460434 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx5hx\" (UniqueName: \"kubernetes.io/projected/f86c8f25-0e6c-4911-87f8-7ff89a25a040-kube-api-access-mx5hx\") pod \"mariadb-operator-controller-manager-67ccfc9778-l4h6z\" (UID: \"f86c8f25-0e6c-4911-87f8-7ff89a25a040\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.493998 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.520838 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9m5tk" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.538436 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.548064 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.552435 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-h5wkp" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.562894 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpfl\" (UniqueName: \"kubernetes.io/projected/2d1893e2-6251-42ef-82d7-529e1f27ec4c-kube-api-access-4vpfl\") pod \"neutron-operator-controller-manager-767865f676-v6rv8\" (UID: \"2d1893e2-6251-42ef-82d7-529e1f27ec4c\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.563106 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfb7g\" (UniqueName: \"kubernetes.io/projected/7f57203c-7aa8-4db7-a1f1-973a59e8fb9e-kube-api-access-qfb7g\") pod \"nova-operator-controller-manager-5d488d59fb-8glm4\" (UID: \"7f57203c-7aa8-4db7-a1f1-973a59e8fb9e\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.609514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lcpw\" (UniqueName: \"kubernetes.io/projected/d567742c-e8c4-4c28-9aae-afb3527cd915-kube-api-access-8lcpw\") pod \"ironic-operator-controller-manager-6f787dddc9-kddxh\" (UID: \"d567742c-e8c4-4c28-9aae-afb3527cd915\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.609700 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8znz2\" (UniqueName: \"kubernetes.io/projected/56663366-8771-43d4-b5df-ef9b84b90a74-kube-api-access-8znz2\") pod 
\"keystone-operator-controller-manager-768b96df4c-xnh2t\" (UID: \"56663366-8771-43d4-b5df-ef9b84b90a74\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.613423 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.628399 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56djv\" (UniqueName: \"kubernetes.io/projected/633285e4-04be-48d6-a496-642aa673be88-kube-api-access-56djv\") pod \"manila-operator-controller-manager-55f864c847-9m5mv\" (UID: \"633285e4-04be-48d6-a496-642aa673be88\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.633910 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpfl\" (UniqueName: \"kubernetes.io/projected/2d1893e2-6251-42ef-82d7-529e1f27ec4c-kube-api-access-4vpfl\") pod \"neutron-operator-controller-manager-767865f676-v6rv8\" (UID: \"2d1893e2-6251-42ef-82d7-529e1f27ec4c\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.649782 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx5hx\" (UniqueName: \"kubernetes.io/projected/f86c8f25-0e6c-4911-87f8-7ff89a25a040-kube-api-access-mx5hx\") pod \"mariadb-operator-controller-manager-67ccfc9778-l4h6z\" (UID: \"f86c8f25-0e6c-4911-87f8-7ff89a25a040\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.662865 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-grt7j"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.665188 
4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.668597 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78dc\" (UniqueName: \"kubernetes.io/projected/d2cd8f0d-237c-4db5-b2c6-31c6d99018e4-kube-api-access-l78dc\") pod \"octavia-operator-controller-manager-5b9f45d989-8b4ps\" (UID: \"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.669473 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4vk62" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.676814 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfb7g\" (UniqueName: \"kubernetes.io/projected/7f57203c-7aa8-4db7-a1f1-973a59e8fb9e-kube-api-access-qfb7g\") pod \"nova-operator-controller-manager-5d488d59fb-8glm4\" (UID: \"7f57203c-7aa8-4db7-a1f1-973a59e8fb9e\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.684503 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-grt7j"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.694259 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.696105 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.701235 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r7vwg" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.701253 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.781687 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.782895 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.782949 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.783046 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78dc\" (UniqueName: \"kubernetes.io/projected/d2cd8f0d-237c-4db5-b2c6-31c6d99018e4-kube-api-access-l78dc\") pod \"octavia-operator-controller-manager-5b9f45d989-8b4ps\" (UID: 
\"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.783092 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghjcn\" (UniqueName: \"kubernetes.io/projected/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-kube-api-access-ghjcn\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.783105 4857 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.783185 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert podName:2fc1a575-873e-43b1-9707-bc6247ec8bbc nodeName:}" failed. No retries permitted until 2026-03-18 14:22:34.783162763 +0000 UTC m=+1338.912291220 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert") pod "infra-operator-controller-manager-7b9c774f96-xjwdv" (UID: "2fc1a575-873e-43b1-9707-bc6247ec8bbc") : secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.783214 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wx9z\" (UniqueName: \"kubernetes.io/projected/ede9ac94-86ad-47ad-9358-4c051ec447cc-kube-api-access-9wx9z\") pod \"ovn-operator-controller-manager-884679f54-grt7j\" (UID: \"ede9ac94-86ad-47ad-9358-4c051ec447cc\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.809465 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.816189 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfb7g\" (UniqueName: \"kubernetes.io/projected/7f57203c-7aa8-4db7-a1f1-973a59e8fb9e-kube-api-access-qfb7g\") pod \"nova-operator-controller-manager-5d488d59fb-8glm4\" (UID: \"7f57203c-7aa8-4db7-a1f1-973a59e8fb9e\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.822597 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78dc\" (UniqueName: \"kubernetes.io/projected/d2cd8f0d-237c-4db5-b2c6-31c6d99018e4-kube-api-access-l78dc\") pod \"octavia-operator-controller-manager-5b9f45d989-8b4ps\" (UID: \"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.840598 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.860686 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.861341 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.865427 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.871830 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.889106 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j6mxt" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.890200 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.890287 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghjcn\" (UniqueName: \"kubernetes.io/projected/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-kube-api-access-ghjcn\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.890366 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wx9z\" (UniqueName: \"kubernetes.io/projected/ede9ac94-86ad-47ad-9358-4c051ec447cc-kube-api-access-9wx9z\") pod \"ovn-operator-controller-manager-884679f54-grt7j\" (UID: \"ede9ac94-86ad-47ad-9358-4c051ec447cc\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.893426 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-86872"] Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.896938 4857 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: E0318 14:22:33.897141 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert podName:f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:34.397120753 +0000 UTC m=+1338.526249210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" (UID: "f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.905175 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.905629 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.909244 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.911075 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mjxxm" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.919452 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wx9z\" (UniqueName: \"kubernetes.io/projected/ede9ac94-86ad-47ad-9358-4c051ec447cc-kube-api-access-9wx9z\") pod \"ovn-operator-controller-manager-884679f54-grt7j\" (UID: \"ede9ac94-86ad-47ad-9358-4c051ec447cc\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.920446 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghjcn\" (UniqueName: \"kubernetes.io/projected/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-kube-api-access-ghjcn\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.935519 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.937655 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.943501 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k92rm" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.946730 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.949861 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-86872"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.959522 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.989462 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j"] Mar 18 14:22:33 crc kubenswrapper[4857]: I0318 14:22:33.994501 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7q5\" (UniqueName: \"kubernetes.io/projected/ffdcecae-8dae-48b2-84d8-73deac76eeca-kube-api-access-xr7q5\") pod \"placement-operator-controller-manager-5784578c99-nqn4p\" (UID: \"ffdcecae-8dae-48b2-84d8-73deac76eeca\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.004458 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.004492 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb"] Mar 18 
14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.005360 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.005892 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.010338 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-88hxg" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.010945 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6hc85" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.031829 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.085866 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.087059 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.090029 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.094857 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.095188 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-q6nd5" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.095350 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.107252 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.109350 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrmq\" (UniqueName: \"kubernetes.io/projected/32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d-kube-api-access-9wrmq\") pod \"swift-operator-controller-manager-c674c5965-86872\" (UID: \"32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.109734 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrxq\" (UniqueName: \"kubernetes.io/projected/bdf23497-4141-4f8f-859a-0d1e4f8c80f7-kube-api-access-nlrxq\") pod \"telemetry-operator-controller-manager-5b79d7bc79-hmbhp\" (UID: \"bdf23497-4141-4f8f-859a-0d1e4f8c80f7\") " 
pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.109815 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7q5\" (UniqueName: \"kubernetes.io/projected/ffdcecae-8dae-48b2-84d8-73deac76eeca-kube-api-access-xr7q5\") pod \"placement-operator-controller-manager-5784578c99-nqn4p\" (UID: \"ffdcecae-8dae-48b2-84d8-73deac76eeca\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.134293 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.135058 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7q5\" (UniqueName: \"kubernetes.io/projected/ffdcecae-8dae-48b2-84d8-73deac76eeca-kube-api-access-xr7q5\") pod \"placement-operator-controller-manager-5784578c99-nqn4p\" (UID: \"ffdcecae-8dae-48b2-84d8-73deac76eeca\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.241023 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.242826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.242865 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnv9\" (UniqueName: \"kubernetes.io/projected/18b73b64-9eec-426b-86eb-6a1045a9d25c-kube-api-access-wqnv9\") pod \"watcher-operator-controller-manager-6c4d75f7f9-fjnbb\" (UID: \"18b73b64-9eec-426b-86eb-6a1045a9d25c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.242929 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvs2f\" (UniqueName: \"kubernetes.io/projected/bf950907-821d-4d28-a563-f9865d7df7f0-kube-api-access-zvs2f\") pod \"test-operator-controller-manager-5c5cb9c4d7-qpr5j\" (UID: \"bf950907-821d-4d28-a563-f9865d7df7f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.242978 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrxq\" (UniqueName: \"kubernetes.io/projected/bdf23497-4141-4f8f-859a-0d1e4f8c80f7-kube-api-access-nlrxq\") pod \"telemetry-operator-controller-manager-5b79d7bc79-hmbhp\" (UID: \"bdf23497-4141-4f8f-859a-0d1e4f8c80f7\") " pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 
14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.243010 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx7f\" (UniqueName: \"kubernetes.io/projected/cf688963-c59d-4667-8589-150c82a1e4d3-kube-api-access-vzx7f\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.243072 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.243096 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrmq\" (UniqueName: \"kubernetes.io/projected/32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d-kube-api-access-9wrmq\") pod \"swift-operator-controller-manager-c674c5965-86872\" (UID: \"32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.245515 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.246898 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.265638 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-hdc2r" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.276170 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrxq\" (UniqueName: \"kubernetes.io/projected/bdf23497-4141-4f8f-859a-0d1e4f8c80f7-kube-api-access-nlrxq\") pod \"telemetry-operator-controller-manager-5b79d7bc79-hmbhp\" (UID: \"bdf23497-4141-4f8f-859a-0d1e4f8c80f7\") " pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.276407 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrmq\" (UniqueName: \"kubernetes.io/projected/32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d-kube-api-access-9wrmq\") pod \"swift-operator-controller-manager-c674c5965-86872\" (UID: \"32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.276480 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.349614 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.350008 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.350041 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnv9\" (UniqueName: \"kubernetes.io/projected/18b73b64-9eec-426b-86eb-6a1045a9d25c-kube-api-access-wqnv9\") pod \"watcher-operator-controller-manager-6c4d75f7f9-fjnbb\" (UID: \"18b73b64-9eec-426b-86eb-6a1045a9d25c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.350230 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvs2f\" (UniqueName: \"kubernetes.io/projected/bf950907-821d-4d28-a563-f9865d7df7f0-kube-api-access-zvs2f\") pod \"test-operator-controller-manager-5c5cb9c4d7-qpr5j\" (UID: \"bf950907-821d-4d28-a563-f9865d7df7f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.350397 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx7f\" (UniqueName: \"kubernetes.io/projected/cf688963-c59d-4667-8589-150c82a1e4d3-kube-api-access-vzx7f\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.351793 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.351851 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:34.851832663 +0000 UTC m=+1338.980961120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.352121 4857 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.352182 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:34.852162241 +0000 UTC m=+1338.981290778 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "metrics-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.368994 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnv9\" (UniqueName: \"kubernetes.io/projected/18b73b64-9eec-426b-86eb-6a1045a9d25c-kube-api-access-wqnv9\") pod \"watcher-operator-controller-manager-6c4d75f7f9-fjnbb\" (UID: \"18b73b64-9eec-426b-86eb-6a1045a9d25c\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.372609 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzx7f\" (UniqueName: \"kubernetes.io/projected/cf688963-c59d-4667-8589-150c82a1e4d3-kube-api-access-vzx7f\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.374903 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvs2f\" (UniqueName: \"kubernetes.io/projected/bf950907-821d-4d28-a563-f9865d7df7f0-kube-api-access-zvs2f\") pod \"test-operator-controller-manager-5c5cb9c4d7-qpr5j\" (UID: \"bf950907-821d-4d28-a563-f9865d7df7f0\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.455624 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98v2c\" (UniqueName: \"kubernetes.io/projected/d992ef23-4762-4349-b1e4-9f6c562a75ac-kube-api-access-98v2c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8g8kw\" 
(UID: \"d992ef23-4762-4349-b1e4-9f6c562a75ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.455906 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.458241 4857 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.458315 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert podName:f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:35.458289142 +0000 UTC m=+1339.587417599 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" (UID: "f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.514154 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.519595 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.536469 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.555187 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.557592 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98v2c\" (UniqueName: \"kubernetes.io/projected/d992ef23-4762-4349-b1e4-9f6c562a75ac-kube-api-access-98v2c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8g8kw\" (UID: \"d992ef23-4762-4349-b1e4-9f6c562a75ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.564526 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.565551 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d"] Mar 18 14:22:34 crc kubenswrapper[4857]: W0318 14:22:34.578123 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73a9b06c_5f5c_46f7_9548_28c5a9513a95.slice/crio-f6800a15400ca05e3dacfc2c5c9e62566956c2ccefbd65630a2a1117bfa5cc20 WatchSource:0}: Error finding container f6800a15400ca05e3dacfc2c5c9e62566956c2ccefbd65630a2a1117bfa5cc20: Status 404 returned error can't find the container with id f6800a15400ca05e3dacfc2c5c9e62566956c2ccefbd65630a2a1117bfa5cc20 Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.578324 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98v2c\" (UniqueName: \"kubernetes.io/projected/d992ef23-4762-4349-b1e4-9f6c562a75ac-kube-api-access-98v2c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8g8kw\" (UID: \"d992ef23-4762-4349-b1e4-9f6c562a75ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.591324 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.660494 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.834919 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" event={"ID":"8ffb9263-05b9-447d-a332-31f5f3312ea9","Type":"ContainerStarted","Data":"63b7445eb74376e879175e89ab5cd6f41fd1336ae057a8ca48ee40646d5b7d9a"} Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.836790 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" event={"ID":"b876d788-10af-45fb-95e6-37e7e127249f","Type":"ContainerStarted","Data":"8c11d631c6143f4523033fa3993cd5a58e0f5d64b9e6a76061cac4d1452bfb65"} Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.844789 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" event={"ID":"73a9b06c-5f5c-46f7-9548-28c5a9513a95","Type":"ContainerStarted","Data":"f6800a15400ca05e3dacfc2c5c9e62566956c2ccefbd65630a2a1117bfa5cc20"} Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.869708 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.869882 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " 
pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.870000 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870096 4857 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870172 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870250 4857 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870179 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert podName:2fc1a575-873e-43b1-9707-bc6247ec8bbc nodeName:}" failed. No retries permitted until 2026-03-18 14:22:36.870158396 +0000 UTC m=+1340.999286853 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert") pod "infra-operator-controller-manager-7b9c774f96-xjwdv" (UID: "2fc1a575-873e-43b1-9707-bc6247ec8bbc") : secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870288 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:35.870272109 +0000 UTC m=+1339.999400566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: E0318 14:22:34.870298 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:35.87029345 +0000 UTC m=+1339.999421907 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "metrics-server-cert" not found Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.975793 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f"] Mar 18 14:22:34 crc kubenswrapper[4857]: I0318 14:22:34.987921 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv"] Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.002360 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh"] Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.022727 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t"] Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.029661 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8"] Mar 18 14:22:35 crc kubenswrapper[4857]: W0318 14:22:35.061245 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56663366_8771_43d4_b5df_ef9b84b90a74.slice/crio-6a08a38e29726ea7a90f6164f84c784e0f0f7269b4891287dd17854ee741fea3 WatchSource:0}: Error finding container 6a08a38e29726ea7a90f6164f84c784e0f0f7269b4891287dd17854ee741fea3: Status 404 returned error can't find the container with id 6a08a38e29726ea7a90f6164f84c784e0f0f7269b4891287dd17854ee741fea3 Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.497600 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.497938 4857 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.497993 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert podName:f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:37.497978392 +0000 UTC m=+1341.627106849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" (UID: "f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.530605 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4"] Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.555173 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z"] Mar 18 14:22:35 crc kubenswrapper[4857]: W0318 14:22:35.567328 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f57203c_7aa8_4db7_a1f1_973a59e8fb9e.slice/crio-78c7deeb17e77b138254271695fb8a701f12dc67f52691cb43d585e1179d8eb1 WatchSource:0}: Error finding container 
78c7deeb17e77b138254271695fb8a701f12dc67f52691cb43d585e1179d8eb1: Status 404 returned error can't find the container with id 78c7deeb17e77b138254271695fb8a701f12dc67f52691cb43d585e1179d8eb1 Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.579087 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2"] Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.587626 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv"] Mar 18 14:22:35 crc kubenswrapper[4857]: W0318 14:22:35.626865 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod633285e4_04be_48d6_a496_642aa673be88.slice/crio-3c6d4df4d8a5b7e16d85038e08e2fbd5471fe48e86ad17c015143b9df0cfac89 WatchSource:0}: Error finding container 3c6d4df4d8a5b7e16d85038e08e2fbd5471fe48e86ad17c015143b9df0cfac89: Status 404 returned error can't find the container with id 3c6d4df4d8a5b7e16d85038e08e2fbd5471fe48e86ad17c015143b9df0cfac89 Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.866930 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" event={"ID":"2d1893e2-6251-42ef-82d7-529e1f27ec4c","Type":"ContainerStarted","Data":"8c2bcd030857c9dacf36c244568232c0da1061e0ee9e8f756b7ba1fb17d0b29e"} Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.869183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" event={"ID":"cffafd39-a112-46ab-becf-ad58facd5712","Type":"ContainerStarted","Data":"beb1f3eefa9d952e7da8685359b36d45e78ab53ebeb7a3652188147adb00592c"} Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.893063 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" event={"ID":"7f57203c-7aa8-4db7-a1f1-973a59e8fb9e","Type":"ContainerStarted","Data":"78c7deeb17e77b138254271695fb8a701f12dc67f52691cb43d585e1179d8eb1"} Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.903079 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" event={"ID":"633285e4-04be-48d6-a496-642aa673be88","Type":"ContainerStarted","Data":"3c6d4df4d8a5b7e16d85038e08e2fbd5471fe48e86ad17c015143b9df0cfac89"} Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.920846 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" event={"ID":"56663366-8771-43d4-b5df-ef9b84b90a74","Type":"ContainerStarted","Data":"6a08a38e29726ea7a90f6164f84c784e0f0f7269b4891287dd17854ee741fea3"} Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.928734 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.931626 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.931959 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found 
Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.932035 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:37.932015177 +0000 UTC m=+1342.061143634 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.932047 4857 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 14:22:35 crc kubenswrapper[4857]: E0318 14:22:35.932130 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:37.93211274 +0000 UTC m=+1342.061241197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "metrics-server-cert" not found Mar 18 14:22:35 crc kubenswrapper[4857]: I0318 14:22:35.945076 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" event={"ID":"f86c8f25-0e6c-4911-87f8-7ff89a25a040","Type":"ContainerStarted","Data":"bdf7b181e3b09f777be67b26bab2e00807e5f4e1269b22a153d3858b7b088136"} Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.029262 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" event={"ID":"d567742c-e8c4-4c28-9aae-afb3527cd915","Type":"ContainerStarted","Data":"c9e340447636f221d82862d364773ad31c75f3ced260dcff3425bfb33040bc19"} Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.039248 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" event={"ID":"e160f13b-785a-46a2-adb4-fa92ce7c6ab7","Type":"ContainerStarted","Data":"94b7f39699942c0fea1b32f634c3946f7f577d70f99038ad2bfa0871483bbeed"} Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.061582 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" event={"ID":"01c6ffec-b474-4bfb-a282-484214bea129","Type":"ContainerStarted","Data":"3b46206d7844f23f0e571a93da2e218fdc99d185f7bb5a3dae589c8b3e60d719"} Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.232333 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-grt7j"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.252619 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.297355 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb"] Mar 18 14:22:36 crc kubenswrapper[4857]: W0318 14:22:36.298070 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podede9ac94_86ad_47ad_9358_4c051ec447cc.slice/crio-b63c87a74c76df2be2d3880b4ed307104d906493a8abff13c43f97862702afac WatchSource:0}: Error finding container b63c87a74c76df2be2d3880b4ed307104d906493a8abff13c43f97862702afac: Status 404 returned error can't find the container with id b63c87a74c76df2be2d3880b4ed307104d906493a8abff13c43f97862702afac Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.326492 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps"] Mar 18 14:22:36 crc kubenswrapper[4857]: W0318 14:22:36.332534 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffdcecae_8dae_48b2_84d8_73deac76eeca.slice/crio-d1c282c7455a27551c7a159adb3883c1c673aa5eaba3729a1c47406bc0bf81da WatchSource:0}: Error finding container d1c282c7455a27551c7a159adb3883c1c673aa5eaba3729a1c47406bc0bf81da: Status 404 returned error can't find the container with id d1c282c7455a27551c7a159adb3883c1c673aa5eaba3729a1c47406bc0bf81da Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.343001 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.366779 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 
14:22:36.408395 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-86872"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.445747 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw"] Mar 18 14:22:36 crc kubenswrapper[4857]: I0318 14:22:36.955076 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:36 crc kubenswrapper[4857]: E0318 14:22:36.955307 4857 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:36 crc kubenswrapper[4857]: E0318 14:22:36.955362 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert podName:2fc1a575-873e-43b1-9707-bc6247ec8bbc nodeName:}" failed. No retries permitted until 2026-03-18 14:22:40.955346555 +0000 UTC m=+1345.084475012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert") pod "infra-operator-controller-manager-7b9c774f96-xjwdv" (UID: "2fc1a575-873e-43b1-9707-bc6247ec8bbc") : secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.114164 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" event={"ID":"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4","Type":"ContainerStarted","Data":"b834560c703b42468e6ecb1f8d90dee4006ff95e65867f0d3c8d61faa9656dd4"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.144866 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" event={"ID":"18b73b64-9eec-426b-86eb-6a1045a9d25c","Type":"ContainerStarted","Data":"e2ba9f2372a946a436bf38c9621db78bab9f2ecaf8619ab7f15e29ead9ec1d01"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.159636 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" event={"ID":"d992ef23-4762-4349-b1e4-9f6c562a75ac","Type":"ContainerStarted","Data":"fcd676915ff4850b1cd4c5fe0d81e1b5dc2941d2bf97ce190f7cdba9d54c2113"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.310821 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" event={"ID":"ede9ac94-86ad-47ad-9358-4c051ec447cc","Type":"ContainerStarted","Data":"b63c87a74c76df2be2d3880b4ed307104d906493a8abff13c43f97862702afac"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.310879 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" 
event={"ID":"ffdcecae-8dae-48b2-84d8-73deac76eeca","Type":"ContainerStarted","Data":"d1c282c7455a27551c7a159adb3883c1c673aa5eaba3729a1c47406bc0bf81da"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.310895 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" event={"ID":"bf950907-821d-4d28-a563-f9865d7df7f0","Type":"ContainerStarted","Data":"b746e38f9e9291949b2f2e689d58871cc8b0a0306f692c89b2f45d2951cc5fdd"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.310909 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" event={"ID":"bdf23497-4141-4f8f-859a-0d1e4f8c80f7","Type":"ContainerStarted","Data":"0d94d992e5e3f43388567214de8ccda85c33302823bf622088cb1b8128afb706"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.310922 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" event={"ID":"32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d","Type":"ContainerStarted","Data":"9f2dfddb70e441e733dc660b0a4b5a9301522f6cb48de00c3d05e8154589fd43"} Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.582033 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.582237 4857 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.582290 4857 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert podName:f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:41.582274179 +0000 UTC m=+1345.711402636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" (UID: "f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.989851 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:37 crc kubenswrapper[4857]: I0318 14:22:37.990026 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.990293 4857 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.990361 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. 
No retries permitted until 2026-03-18 14:22:41.990341192 +0000 UTC m=+1346.119469649 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "metrics-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.990653 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 14:22:37 crc kubenswrapper[4857]: E0318 14:22:37.990821 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:41.990739082 +0000 UTC m=+1346.119867619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:40 crc kubenswrapper[4857]: I0318 14:22:40.969107 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:40 crc kubenswrapper[4857]: E0318 14:22:40.969391 4857 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:40 crc kubenswrapper[4857]: E0318 14:22:40.969558 4857 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert podName:2fc1a575-873e-43b1-9707-bc6247ec8bbc nodeName:}" failed. No retries permitted until 2026-03-18 14:22:48.969536472 +0000 UTC m=+1353.098664929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert") pod "infra-operator-controller-manager-7b9c774f96-xjwdv" (UID: "2fc1a575-873e-43b1-9707-bc6247ec8bbc") : secret "infra-operator-webhook-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: I0318 14:22:41.683863 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.684046 4857 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.684112 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert podName:f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:49.684092415 +0000 UTC m=+1353.813220862 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" (UID: "f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: I0318 14:22:41.992491 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:41 crc kubenswrapper[4857]: I0318 14:22:41.992650 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.992739 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.992838 4857 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.992869 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:49.992843799 +0000 UTC m=+1354.121972246 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:41 crc kubenswrapper[4857]: E0318 14:22:41.992917 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:22:49.992896541 +0000 UTC m=+1354.122024998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "metrics-server-cert" not found Mar 18 14:22:48 crc kubenswrapper[4857]: I0318 14:22:48.978733 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:48 crc kubenswrapper[4857]: I0318 14:22:48.985077 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fc1a575-873e-43b1-9707-bc6247ec8bbc-cert\") pod \"infra-operator-controller-manager-7b9c774f96-xjwdv\" (UID: \"2fc1a575-873e-43b1-9707-bc6247ec8bbc\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:49 crc kubenswrapper[4857]: I0318 14:22:49.253268 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:22:49 crc kubenswrapper[4857]: I0318 14:22:49.694024 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:49 crc kubenswrapper[4857]: I0318 14:22:49.701222 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jcmxv\" (UID: \"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:49 crc kubenswrapper[4857]: I0318 14:22:49.743102 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:22:49 crc kubenswrapper[4857]: I0318 14:22:49.999013 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:50 crc kubenswrapper[4857]: I0318 14:22:49.999459 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:50 crc kubenswrapper[4857]: E0318 14:22:49.999656 4857 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 14:22:50 crc kubenswrapper[4857]: E0318 14:22:49.999772 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs podName:cf688963-c59d-4667-8589-150c82a1e4d3 nodeName:}" failed. No retries permitted until 2026-03-18 14:23:05.999732299 +0000 UTC m=+1370.128860756 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs") pod "openstack-operator-controller-manager-f84d7fd4f-mpg2d" (UID: "cf688963-c59d-4667-8589-150c82a1e4d3") : secret "webhook-server-cert" not found Mar 18 14:22:50 crc kubenswrapper[4857]: I0318 14:22:50.004081 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-metrics-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:22:53 crc kubenswrapper[4857]: E0318 14:22:53.948909 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" Mar 18 14:22:53 crc kubenswrapper[4857]: E0318 14:22:53.950669 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qfb7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5d488d59fb-8glm4_openstack-operators(7f57203c-7aa8-4db7-a1f1-973a59e8fb9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:53 crc kubenswrapper[4857]: E0318 14:22:53.952096 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podUID="7f57203c-7aa8-4db7-a1f1-973a59e8fb9e" Mar 18 14:22:54 crc kubenswrapper[4857]: E0318 14:22:54.520636 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podUID="7f57203c-7aa8-4db7-a1f1-973a59e8fb9e" Mar 18 14:22:54 crc kubenswrapper[4857]: E0318 14:22:54.732860 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" Mar 18 14:22:54 crc kubenswrapper[4857]: E0318 14:22:54.733516 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xr7q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-nqn4p_openstack-operators(ffdcecae-8dae-48b2-84d8-73deac76eeca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:54 crc kubenswrapper[4857]: E0318 14:22:54.734869 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" Mar 18 14:22:55 crc kubenswrapper[4857]: E0318 14:22:55.595972 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" Mar 18 14:22:56 crc kubenswrapper[4857]: E0318 14:22:56.382057 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" Mar 18 14:22:56 crc kubenswrapper[4857]: E0318 14:22:56.382355 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mx5hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67ccfc9778-l4h6z_openstack-operators(f86c8f25-0e6c-4911-87f8-7ff89a25a040): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:56 crc kubenswrapper[4857]: E0318 14:22:56.383636 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" Mar 18 14:22:56 crc kubenswrapper[4857]: E0318 14:22:56.612252 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" Mar 18 14:22:57 crc kubenswrapper[4857]: I0318 14:22:57.088652 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:22:57 crc kubenswrapper[4857]: I0318 14:22:57.088835 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:22:57 crc kubenswrapper[4857]: E0318 14:22:57.606367 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" Mar 18 14:22:57 crc kubenswrapper[4857]: E0318 14:22:57.606778 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4vpfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-v6rv8_openstack-operators(2d1893e2-6251-42ef-82d7-529e1f27ec4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:57 crc kubenswrapper[4857]: E0318 14:22:57.607966 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" Mar 18 14:22:57 crc kubenswrapper[4857]: E0318 14:22:57.623959 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" Mar 18 14:22:58 crc kubenswrapper[4857]: E0318 14:22:58.966506 4857 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d" Mar 18 14:22:58 crc kubenswrapper[4857]: E0318 14:22:58.967262 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9d4p4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-79df6bcc97-dmrdv_openstack-operators(e160f13b-785a-46a2-adb4-fa92ce7c6ab7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:58 crc kubenswrapper[4857]: E0318 14:22:58.968463 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" Mar 18 14:22:59 crc kubenswrapper[4857]: E0318 14:22:59.639052 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:76a1cde9f29fb39ed715b06be16adb803b9a2e24d68acb369911c0a88e33bc7d\\\"\"" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" Mar 18 14:22:59 crc kubenswrapper[4857]: E0318 14:22:59.781345 4857 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900" Mar 18 14:22:59 crc kubenswrapper[4857]: E0318 14:22:59.781559 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drhq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-67dd5f86f5-fvz4f_openstack-operators(cffafd39-a112-46ab-becf-ad58facd5712): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:22:59 crc kubenswrapper[4857]: E0318 14:22:59.782802 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" Mar 18 14:23:00 crc kubenswrapper[4857]: E0318 14:23:00.650563 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:c6ef5db244d874430a56c3cc9d27662e4bd57cdaa489e1f6059abcacf3aa0900\\\"\"" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" Mar 18 14:23:01 crc kubenswrapper[4857]: E0318 14:23:01.104070 4857 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad" Mar 18 14:23:01 crc kubenswrapper[4857]: E0318 14:23:01.104352 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7522m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-588d4d986b-ptv8b_openstack-operators(8ffb9263-05b9-447d-a332-31f5f3312ea9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:23:01 crc kubenswrapper[4857]: E0318 14:23:01.106082 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" Mar 18 14:23:01 crc kubenswrapper[4857]: E0318 14:23:01.763472 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:12841b27173f5f1beeb83112e057c8753f4cf411f583fba4f0610fac0f60b7ad\\\"\"" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" Mar 18 14:23:02 crc kubenswrapper[4857]: E0318 14:23:02.047669 4857 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" Mar 18 14:23:02 crc kubenswrapper[4857]: E0318 14:23:02.047904 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wn2h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-8464cc45fb-fqnq2_openstack-operators(01c6ffec-b474-4bfb-a282-484214bea129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:23:02 crc kubenswrapper[4857]: E0318 14:23:02.049084 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" Mar 18 14:23:02 crc kubenswrapper[4857]: E0318 14:23:02.775171 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.051630 4857 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.051824 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l78dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5b9f45d989-8b4ps_openstack-operators(d2cd8f0d-237c-4db5-b2c6-31c6d99018e4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.052928 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podUID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.243431 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.194:5001/openstack-k8s-operators/telemetry-operator:15c2ffcfe08e13a1dec28232b4ee653042564ac3" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.243793 4857 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.194:5001/openstack-k8s-operators/telemetry-operator:15c2ffcfe08e13a1dec28232b4ee653042564ac3" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.243971 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.194:5001/openstack-k8s-operators/telemetry-operator:15c2ffcfe08e13a1dec28232b4ee653042564ac3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nlrxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5b79d7bc79-hmbhp_openstack-operators(bdf23497-4141-4f8f-859a-0d1e4f8c80f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.245136 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.713018 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv"] Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.722444 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv"] Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.814119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" event={"ID":"633285e4-04be-48d6-a496-642aa673be88","Type":"ContainerStarted","Data":"fa31803cc7d28993fa4218120a624af02801bc314a592cdd938ace4797c7b004"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.814665 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.823899 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" event={"ID":"d567742c-e8c4-4c28-9aae-afb3527cd915","Type":"ContainerStarted","Data":"2aa75d7fc73361e725995b470aafe4a651cfb74cde1c4c9e74b4877589f27b37"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.824419 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.830349 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" event={"ID":"18b73b64-9eec-426b-86eb-6a1045a9d25c","Type":"ContainerStarted","Data":"20780c6374f596a258d4c47de8465a8a0bc7b00d07a664336a1d0f06f8cbbea7"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.830440 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.837453 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" event={"ID":"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4","Type":"ContainerStarted","Data":"82a686bfc92fb6c05a3198db27c05b307c70eb43e2f597ac717fff73e0977c17"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.849966 4857 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" podStartSLOduration=4.269211898 podStartE2EDuration="31.84992348s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.651707964 +0000 UTC m=+1339.780836421" lastFinishedPulling="2026-03-18 14:23:03.232419546 +0000 UTC m=+1367.361548003" observedRunningTime="2026-03-18 14:23:03.845461968 +0000 UTC m=+1367.974590425" watchObservedRunningTime="2026-03-18 14:23:03.84992348 +0000 UTC m=+1367.979051937" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.851013 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" event={"ID":"ede9ac94-86ad-47ad-9358-4c051ec447cc","Type":"ContainerStarted","Data":"457901d86c44bf3b0d5a799bf67807ff3475bfd91c12a052c45a73340aa4485c"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.852039 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.856022 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" event={"ID":"bf950907-821d-4d28-a563-f9865d7df7f0","Type":"ContainerStarted","Data":"6c877b244bcedb5451281e75fbe4208dddf338b59a06eaf005e1e4fef120e741"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.857039 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.870251 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" 
event={"ID":"56663366-8771-43d4-b5df-ef9b84b90a74","Type":"ContainerStarted","Data":"e82b20c999cdd80859968dea5bd5f54e4a40d1d8fe8c9665c3e21977c6ac7c21"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.871092 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.874833 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" podStartSLOduration=3.6605687639999998 podStartE2EDuration="31.874802964s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:34.996987904 +0000 UTC m=+1339.126116361" lastFinishedPulling="2026-03-18 14:23:03.211222104 +0000 UTC m=+1367.340350561" observedRunningTime="2026-03-18 14:23:03.864949417 +0000 UTC m=+1367.994077874" watchObservedRunningTime="2026-03-18 14:23:03.874802964 +0000 UTC m=+1368.003931421" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.880936 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" event={"ID":"32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d","Type":"ContainerStarted","Data":"9ab309f9d8a18847535aeb8f5b31b02b5dfb3bc4e8690c101de196789c56db33"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.882012 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.883687 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" event={"ID":"b876d788-10af-45fb-95e6-37e7e127249f","Type":"ContainerStarted","Data":"30a932f460decd94e76281991e7ac6d5e59a2155679812a35ddbfb3de2a7e621"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.884343 
4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.885193 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" event={"ID":"2fc1a575-873e-43b1-9707-bc6247ec8bbc","Type":"ContainerStarted","Data":"409cc014b9a03b4a6db02b49280760500f5e39af548a99779025f93fd9a99d44"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.887293 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" event={"ID":"73a9b06c-5f5c-46f7-9548-28c5a9513a95","Type":"ContainerStarted","Data":"4359e865bc867c78f9f49f3b23b1ba349aa7c1237c1da45aae56236b8afa8b26"} Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.887329 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.891097 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podUID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" Mar 18 14:23:03 crc kubenswrapper[4857]: E0318 14:23:03.891152 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.194:5001/openstack-k8s-operators/telemetry-operator:15c2ffcfe08e13a1dec28232b4ee653042564ac3\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" Mar 18 14:23:03 
crc kubenswrapper[4857]: I0318 14:23:03.899846 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podStartSLOduration=4.002339245 podStartE2EDuration="30.899827593s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.33422882 +0000 UTC m=+1340.463357277" lastFinishedPulling="2026-03-18 14:23:03.231717168 +0000 UTC m=+1367.360845625" observedRunningTime="2026-03-18 14:23:03.898561341 +0000 UTC m=+1368.027689798" watchObservedRunningTime="2026-03-18 14:23:03.899827593 +0000 UTC m=+1368.028956050" Mar 18 14:23:03 crc kubenswrapper[4857]: I0318 14:23:03.950437 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" podStartSLOduration=4.141701313 podStartE2EDuration="30.950412113s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.402716569 +0000 UTC m=+1340.531845026" lastFinishedPulling="2026-03-18 14:23:03.211427369 +0000 UTC m=+1367.340555826" observedRunningTime="2026-03-18 14:23:03.937795146 +0000 UTC m=+1368.066923603" watchObservedRunningTime="2026-03-18 14:23:03.950412113 +0000 UTC m=+1368.079540570" Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 14:23:04.109134 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" podStartSLOduration=3.936449589 podStartE2EDuration="32.109103837s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.067683797 +0000 UTC m=+1339.196812254" lastFinishedPulling="2026-03-18 14:23:03.240338045 +0000 UTC m=+1367.369466502" observedRunningTime="2026-03-18 14:23:04.03273963 +0000 UTC m=+1368.161868087" watchObservedRunningTime="2026-03-18 14:23:04.109103837 +0000 UTC m=+1368.238232294" Mar 18 14:23:04 crc 
kubenswrapper[4857]: I0318 14:23:04.202758 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podStartSLOduration=4.300800036 podStartE2EDuration="31.202722977s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.309527219 +0000 UTC m=+1340.438655686" lastFinishedPulling="2026-03-18 14:23:03.21145016 +0000 UTC m=+1367.340578627" observedRunningTime="2026-03-18 14:23:04.121811546 +0000 UTC m=+1368.250940003" watchObservedRunningTime="2026-03-18 14:23:04.202722977 +0000 UTC m=+1368.331851434" Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 14:23:04.591582 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podStartSLOduration=4.782086611 podStartE2EDuration="31.59156097s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.402778631 +0000 UTC m=+1340.531907088" lastFinishedPulling="2026-03-18 14:23:03.21225299 +0000 UTC m=+1367.341381447" observedRunningTime="2026-03-18 14:23:04.35538949 +0000 UTC m=+1368.484517947" watchObservedRunningTime="2026-03-18 14:23:04.59156097 +0000 UTC m=+1368.720689417" Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 14:23:04.696567 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" podStartSLOduration=9.47043876 podStartE2EDuration="32.696547176s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:34.366447903 +0000 UTC m=+1338.495576360" lastFinishedPulling="2026-03-18 14:22:57.592556309 +0000 UTC m=+1361.721684776" observedRunningTime="2026-03-18 14:23:04.685486428 +0000 UTC m=+1368.814614885" watchObservedRunningTime="2026-03-18 14:23:04.696547176 +0000 UTC m=+1368.825675633" Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 
14:23:04.699681 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podStartSLOduration=4.070722651 podStartE2EDuration="32.699670324s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:34.582944898 +0000 UTC m=+1338.712073355" lastFinishedPulling="2026-03-18 14:23:03.211892561 +0000 UTC m=+1367.341021028" observedRunningTime="2026-03-18 14:23:04.602531565 +0000 UTC m=+1368.731660022" watchObservedRunningTime="2026-03-18 14:23:04.699670324 +0000 UTC m=+1368.828798771" Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 14:23:04.905696 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" event={"ID":"d992ef23-4762-4349-b1e4-9f6c562a75ac","Type":"ContainerStarted","Data":"5f30b65211dbde8c39e649f26b5103878445992b6ec1d09a7011d0d6f7cd22b3"} Mar 18 14:23:04 crc kubenswrapper[4857]: I0318 14:23:04.997133 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8g8kw" podStartSLOduration=5.199653221 podStartE2EDuration="31.997112741s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.422843632 +0000 UTC m=+1340.551972089" lastFinishedPulling="2026-03-18 14:23:03.220303162 +0000 UTC m=+1367.349431609" observedRunningTime="2026-03-18 14:23:04.984192206 +0000 UTC m=+1369.113320663" watchObservedRunningTime="2026-03-18 14:23:04.997112741 +0000 UTC m=+1369.126241198" Mar 18 14:23:06 crc kubenswrapper[4857]: I0318 14:23:06.098667 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " 
pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:23:06 crc kubenswrapper[4857]: I0318 14:23:06.105647 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/cf688963-c59d-4667-8589-150c82a1e4d3-webhook-certs\") pod \"openstack-operator-controller-manager-f84d7fd4f-mpg2d\" (UID: \"cf688963-c59d-4667-8589-150c82a1e4d3\") " pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:23:06 crc kubenswrapper[4857]: I0318 14:23:06.398064 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:23:06 crc kubenswrapper[4857]: I0318 14:23:06.908014 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d"] Mar 18 14:23:07 crc kubenswrapper[4857]: W0318 14:23:07.469122 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf688963_c59d_4667_8589_150c82a1e4d3.slice/crio-19e147daee485635396a1e725bce24f7aabccd368a0339c2d3a1cc50e307e50b WatchSource:0}: Error finding container 19e147daee485635396a1e725bce24f7aabccd368a0339c2d3a1cc50e307e50b: Status 404 returned error can't find the container with id 19e147daee485635396a1e725bce24f7aabccd368a0339c2d3a1cc50e307e50b Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.944410 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" event={"ID":"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4","Type":"ContainerStarted","Data":"1e2eb1b81a9aa740da28aceccb447206333b981331cd80d9eee81e74fb41fe4b"} Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.944950 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.949560 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" event={"ID":"cf688963-c59d-4667-8589-150c82a1e4d3","Type":"ContainerStarted","Data":"1fe74e30f24275d650bce336f1a88c2f905a7db061c6d2448b67101737099d00"} Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.949597 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" event={"ID":"cf688963-c59d-4667-8589-150c82a1e4d3","Type":"ContainerStarted","Data":"19e147daee485635396a1e725bce24f7aabccd368a0339c2d3a1cc50e307e50b"} Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.949696 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:23:07 crc kubenswrapper[4857]: I0318 14:23:07.981222 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podStartSLOduration=31.200776008 podStartE2EDuration="34.981195492s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:23:03.747629891 +0000 UTC m=+1367.876758348" lastFinishedPulling="2026-03-18 14:23:07.528049375 +0000 UTC m=+1371.657177832" observedRunningTime="2026-03-18 14:23:07.972946245 +0000 UTC m=+1372.102074722" watchObservedRunningTime="2026-03-18 14:23:07.981195492 +0000 UTC m=+1372.110323949" Mar 18 14:23:08 crc kubenswrapper[4857]: I0318 14:23:08.013623 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podStartSLOduration=35.013597645 podStartE2EDuration="35.013597645s" podCreationTimestamp="2026-03-18 
14:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:23:08.008336903 +0000 UTC m=+1372.137465360" watchObservedRunningTime="2026-03-18 14:23:08.013597645 +0000 UTC m=+1372.142726102" Mar 18 14:23:09 crc kubenswrapper[4857]: I0318 14:23:09.970842 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" event={"ID":"2fc1a575-873e-43b1-9707-bc6247ec8bbc","Type":"ContainerStarted","Data":"51b05875bec405dbc5375143ade85629f8ca81766191c9cd2cc04a6e146c2eb9"} Mar 18 14:23:09 crc kubenswrapper[4857]: I0318 14:23:09.971563 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:23:09 crc kubenswrapper[4857]: I0318 14:23:09.973383 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" event={"ID":"7f57203c-7aa8-4db7-a1f1-973a59e8fb9e","Type":"ContainerStarted","Data":"ddf87455dcb50ba801e65e3612eb106bd4ded6478398a2b0bebdc2e35389ea7c"} Mar 18 14:23:09 crc kubenswrapper[4857]: I0318 14:23:09.973621 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:23:10 crc kubenswrapper[4857]: I0318 14:23:10.000467 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podStartSLOduration=32.923633267 podStartE2EDuration="38.000446558s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:23:03.764920755 +0000 UTC m=+1367.894049212" lastFinishedPulling="2026-03-18 14:23:08.841734036 +0000 UTC m=+1372.970862503" observedRunningTime="2026-03-18 14:23:09.995976446 +0000 UTC m=+1374.125104943" 
watchObservedRunningTime="2026-03-18 14:23:10.000446558 +0000 UTC m=+1374.129575015" Mar 18 14:23:10 crc kubenswrapper[4857]: I0318 14:23:10.021074 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podStartSLOduration=4.75890068 podStartE2EDuration="38.021050665s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.576211116 +0000 UTC m=+1339.705339573" lastFinishedPulling="2026-03-18 14:23:08.838361091 +0000 UTC m=+1372.967489558" observedRunningTime="2026-03-18 14:23:10.020655595 +0000 UTC m=+1374.149784052" watchObservedRunningTime="2026-03-18 14:23:10.021050665 +0000 UTC m=+1374.150179132" Mar 18 14:23:13 crc kubenswrapper[4857]: I0318 14:23:13.090857 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" Mar 18 14:23:13 crc kubenswrapper[4857]: I0318 14:23:13.094922 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 14:23:13 crc kubenswrapper[4857]: I0318 14:23:13.786862 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 14:23:13 crc kubenswrapper[4857]: I0318 14:23:13.865149 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 14:23:13 crc kubenswrapper[4857]: I0318 14:23:13.902308 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" Mar 18 14:23:14 crc kubenswrapper[4857]: I0318 14:23:14.138355 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 14:23:14 crc kubenswrapper[4857]: I0318 14:23:14.545000 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 14:23:14 crc kubenswrapper[4857]: I0318 14:23:14.571061 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 14:23:14 crc kubenswrapper[4857]: I0318 14:23:14.595057 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" Mar 18 14:23:16 crc kubenswrapper[4857]: I0318 14:23:16.411376 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.304952 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" event={"ID":"e160f13b-785a-46a2-adb4-fa92ce7c6ab7","Type":"ContainerStarted","Data":"8b274f12832fe107f9165a2bd3bc18a6ce056b5d57c8c584729b0493420c3c35"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.305190 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.318419 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" event={"ID":"ffdcecae-8dae-48b2-84d8-73deac76eeca","Type":"ContainerStarted","Data":"0e294ddb44b17449ae6f96ca0b815d1911415b0bd3705003c470b48b3a4e0fea"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.318778 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.320811 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" event={"ID":"01c6ffec-b474-4bfb-a282-484214bea129","Type":"ContainerStarted","Data":"8bfc8ac46dd94070241cbaf691a0be057ae3d22f492988a7ddaa65479890d343"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.321370 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.322582 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" event={"ID":"8ffb9263-05b9-447d-a332-31f5f3312ea9","Type":"ContainerStarted","Data":"e99d9611603ef760a611010e0ee098b9740dfd129122749fe75a9ddbebbac842"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.322802 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.324299 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" event={"ID":"bdf23497-4141-4f8f-859a-0d1e4f8c80f7","Type":"ContainerStarted","Data":"bcc3c4a723e6eb9e06c8f478b8a34ef6d1dec0aea4db2a033fbc62f2b911c632"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.324539 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.331517 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" 
event={"ID":"cffafd39-a112-46ab-becf-ad58facd5712","Type":"ContainerStarted","Data":"e8343b8a5616a01ada51bef754733ac99cb243ebb2f270da221990aeba1a0c1b"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.331968 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.349290 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" event={"ID":"f86c8f25-0e6c-4911-87f8-7ff89a25a040","Type":"ContainerStarted","Data":"35738a5d6925880f7e5050d2bd42507c76c8ae5aa4016398c1a3c1ace655a91d"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.353431 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.367461 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podStartSLOduration=3.830475226 podStartE2EDuration="45.367437517s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.000061887 +0000 UTC m=+1339.129190364" lastFinishedPulling="2026-03-18 14:23:16.537024198 +0000 UTC m=+1380.666152655" observedRunningTime="2026-03-18 14:23:17.346686216 +0000 UTC m=+1381.475814693" watchObservedRunningTime="2026-03-18 14:23:17.367437517 +0000 UTC m=+1381.496565974" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.371922 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" event={"ID":"2d1893e2-6251-42ef-82d7-529e1f27ec4c","Type":"ContainerStarted","Data":"be95252eda8469d2f4d50d369a6c25966c23d660abada3ab087e53c66928e6da"} Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.376360 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.435953 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podStartSLOduration=3.683440616 podStartE2EDuration="45.435929467s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:34.973797269 +0000 UTC m=+1339.102925726" lastFinishedPulling="2026-03-18 14:23:16.72628613 +0000 UTC m=+1380.855414577" observedRunningTime="2026-03-18 14:23:17.381656194 +0000 UTC m=+1381.510784651" watchObservedRunningTime="2026-03-18 14:23:17.435929467 +0000 UTC m=+1381.565057924" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.448706 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podStartSLOduration=4.296878186 podStartE2EDuration="44.448681567s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.371122763 +0000 UTC m=+1340.500251220" lastFinishedPulling="2026-03-18 14:23:16.522926144 +0000 UTC m=+1380.652054601" observedRunningTime="2026-03-18 14:23:17.414844838 +0000 UTC m=+1381.543973295" watchObservedRunningTime="2026-03-18 14:23:17.448681567 +0000 UTC m=+1381.577810024" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.457172 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podStartSLOduration=4.5381933740000004 podStartE2EDuration="45.45714804s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.614843841 +0000 UTC m=+1339.743972298" lastFinishedPulling="2026-03-18 14:23:16.533798507 +0000 UTC m=+1380.662926964" observedRunningTime="2026-03-18 
14:23:17.452679548 +0000 UTC m=+1381.581808005" watchObservedRunningTime="2026-03-18 14:23:17.45714804 +0000 UTC m=+1381.586276497" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.485353 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podStartSLOduration=5.066145169 podStartE2EDuration="44.485325227s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.34551428 +0000 UTC m=+1340.474642737" lastFinishedPulling="2026-03-18 14:23:15.764694328 +0000 UTC m=+1379.893822795" observedRunningTime="2026-03-18 14:23:17.468952566 +0000 UTC m=+1381.598081023" watchObservedRunningTime="2026-03-18 14:23:17.485325227 +0000 UTC m=+1381.614453684" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.507429 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podStartSLOduration=3.368947542 podStartE2EDuration="45.507398481s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:34.400543319 +0000 UTC m=+1338.529671776" lastFinishedPulling="2026-03-18 14:23:16.538994258 +0000 UTC m=+1380.668122715" observedRunningTime="2026-03-18 14:23:17.499616016 +0000 UTC m=+1381.628744473" watchObservedRunningTime="2026-03-18 14:23:17.507398481 +0000 UTC m=+1381.636526938" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.516042 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podStartSLOduration=5.335445932 podStartE2EDuration="45.516025018s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.587024445 +0000 UTC m=+1339.716152892" lastFinishedPulling="2026-03-18 14:23:15.767603521 +0000 UTC m=+1379.896731978" observedRunningTime="2026-03-18 
14:23:17.51570364 +0000 UTC m=+1381.644832097" watchObservedRunningTime="2026-03-18 14:23:17.516025018 +0000 UTC m=+1381.645153475" Mar 18 14:23:17 crc kubenswrapper[4857]: I0318 14:23:17.547704 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podStartSLOduration=4.083082333 podStartE2EDuration="45.547685563s" podCreationTimestamp="2026-03-18 14:22:32 +0000 UTC" firstStartedPulling="2026-03-18 14:22:35.057437772 +0000 UTC m=+1339.186566229" lastFinishedPulling="2026-03-18 14:23:16.522041002 +0000 UTC m=+1380.651169459" observedRunningTime="2026-03-18 14:23:17.54239676 +0000 UTC m=+1381.671525217" watchObservedRunningTime="2026-03-18 14:23:17.547685563 +0000 UTC m=+1381.676814020" Mar 18 14:23:19 crc kubenswrapper[4857]: I0318 14:23:19.262932 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" Mar 18 14:23:19 crc kubenswrapper[4857]: I0318 14:23:19.750714 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:23:20 crc kubenswrapper[4857]: I0318 14:23:20.401134 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" event={"ID":"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4","Type":"ContainerStarted","Data":"a464ec604042f508d8b045a49490f4658b7920b814360845da26813df92ad952"} Mar 18 14:23:20 crc kubenswrapper[4857]: I0318 14:23:20.402704 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:23:20 crc kubenswrapper[4857]: I0318 14:23:20.426078 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podStartSLOduration=4.139781574 podStartE2EDuration="47.426056349s" podCreationTimestamp="2026-03-18 14:22:33 +0000 UTC" firstStartedPulling="2026-03-18 14:22:36.421083249 +0000 UTC m=+1340.550211706" lastFinishedPulling="2026-03-18 14:23:19.707358024 +0000 UTC m=+1383.836486481" observedRunningTime="2026-03-18 14:23:20.420631642 +0000 UTC m=+1384.549760119" watchObservedRunningTime="2026-03-18 14:23:20.426056349 +0000 UTC m=+1384.555184806" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.182907 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.198791 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.249880 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.325802 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.813046 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.913539 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 14:23:23 crc kubenswrapper[4857]: I0318 14:23:23.950319 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" Mar 18 14:23:24 crc kubenswrapper[4857]: I0318 14:23:24.117399 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 14:23:24 crc kubenswrapper[4857]: I0318 14:23:24.244117 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 14:23:24 crc kubenswrapper[4857]: I0318 14:23:24.559389 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 14:23:27 crc kubenswrapper[4857]: I0318 14:23:27.039093 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:23:27 crc kubenswrapper[4857]: I0318 14:23:27.039584 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.864836 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.869074 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.873424 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.873438 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.873845 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.880292 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gglkj" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.885954 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.911084 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.913320 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.926309 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 18 14:23:43 crc kubenswrapper[4857]: I0318 14:23:43.941360 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.017825 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gfx\" (UniqueName: \"kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.017887 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.018173 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.018332 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 
crc kubenswrapper[4857]: I0318 14:23:44.018402 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8h9\" (UniqueName: \"kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.310785 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.311264 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8h9\" (UniqueName: \"kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.311514 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gfx\" (UniqueName: \"kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.311668 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: 
I0318 14:23:44.311852 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.312794 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.313501 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.322635 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.645195 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8h9\" (UniqueName: \"kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9\") pod \"dnsmasq-dns-675f4bcbfc-qbw5m\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.654473 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.661394 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gfx\" (UniqueName: \"kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx\") pod \"dnsmasq-dns-78dd6ddcc-6jms7\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:44 crc kubenswrapper[4857]: I0318 14:23:44.848781 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:23:45 crc kubenswrapper[4857]: I0318 14:23:45.145127 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:23:45 crc kubenswrapper[4857]: I0318 14:23:45.340262 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:23:46 crc kubenswrapper[4857]: I0318 14:23:46.988639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" event={"ID":"774a2c87-55ef-4bf7-a34a-ed282578d470","Type":"ContainerStarted","Data":"e0a259ff385280928f90e7d0eaa6713027e3235cc7fcb0928914834987660110"} Mar 18 14:23:46 crc kubenswrapper[4857]: I0318 14:23:46.992968 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" event={"ID":"68295375-a954-4071-8855-989fac62c318","Type":"ContainerStarted","Data":"e5a6c1ec603a73422a5c45bbaff0796b991301d1a48ee358b103a49c737388cf"} Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.577788 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.601053 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.602801 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.624341 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.796081 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5t9\" (UniqueName: \"kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.796550 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.796606 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.899520 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.899687 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vl5t9\" (UniqueName: \"kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.899819 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.900991 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.903702 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.931426 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.939022 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5t9\" (UniqueName: \"kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9\") pod \"dnsmasq-dns-5ccc8479f9-9h9ps\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.943347 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.972173 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"] Mar 18 14:23:47 crc kubenswrapper[4857]: I0318 14:23:47.974012 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.001777 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"] Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.011485 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.011584 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46ff\" (UniqueName: \"kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.011729 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.114079 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.114146 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46ff\" (UniqueName: \"kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.114170 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.115376 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.115552 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config\") pod \"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.143939 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46ff\" (UniqueName: \"kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff\") pod 
\"dnsmasq-dns-57d769cc4f-xstkt\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.386396 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.719824 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.789742 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.792310 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803065 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803086 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803147 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803186 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803120 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803370 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.803871 4857 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-server-dockercfg-s56f2" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.819709 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948210 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpz2x\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948268 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948306 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948344 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948371 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948399 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948415 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948451 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948507 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948539 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:48 crc kubenswrapper[4857]: I0318 14:23:48.948572 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.054580 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.054740 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.054894 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.054957 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055042 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpz2x\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055077 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055139 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055186 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055220 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055281 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.055315 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.060927 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.061372 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" event={"ID":"9e8855cb-b484-488c-8a84-1d3962dc297f","Type":"ContainerStarted","Data":"cb5dafc1fec49d0d70f585a194d657e62a88a6872794572622811efb306f7c80"} Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.062562 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.062797 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.063411 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.065155 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.071072 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.071152 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.074290 4857 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.074411 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6e5296cdc2d629d606120081853b2f8996ebb05829b621cae3a3133c67b1a52/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.075334 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.077344 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.087451 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpz2x\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.091462 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.102704 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.105630 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110339 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110477 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110501 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j8krl"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110564 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110508 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.110566 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.114511 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.143030 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.186158 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.194169 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.196511 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.201745 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.205301 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.210294 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.218125 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.262802 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.262867 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.262955 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263041 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263081 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263111 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263162 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263224 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263354 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263395 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxl5\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.263424 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365375 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365645 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365678 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365710 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365741 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365783 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365812 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365848 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365880 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.365897 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.366117 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxl5\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.366188 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.366249 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.367546 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.368797 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369012 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369070 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369125 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369148 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx5z6\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369255 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369315 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369336 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f269c\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369643 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369811 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370300 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370341 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370343 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9a8dff8f8a45c8a9f22e8eb98987a1b501748742ba6ae6bab69a4160bd3ccc1b/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370056 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.369997 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370177 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370589 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370678 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370698 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370783 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370841 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370867 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370884 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370913 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.370979 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.376486 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.380229 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.383392 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.396093 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.398772 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxl5\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.427550 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.430006 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " pod="openstack/rabbitmq-server-0"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.473849 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.473908 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.473953 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.473971 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474014 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474043 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474072 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474102 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474131 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474165 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474207 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474246 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474268 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474285 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474301 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx5z6\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474340 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474358 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f269c\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474383 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474399 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474421 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474444 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.474463 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.475796 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.476401 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.483700 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.484034 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.484222 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.484674 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.485129 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.485714 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.486212 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.488054 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.488575 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.488776 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.489400 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.489491 4857 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.493271 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.495523 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.496274 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.496320 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5900510260eda10d1720a81f8ea5bb3416f28283122ef270378e9e5c921d5a4b/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.498400 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.499902 4857 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.499953 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cc840d4d19da8ffddf11bfbc2594b044fc276a15e3ae8ac00eb9baebd04c7ec/globalmount\"" pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.501692 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.505496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx5z6\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.506222 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.507223 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.509185 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.524652 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f269c\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.530736 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5dgsj" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.530822 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.531111 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.532863 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.535536 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.543463 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.583682 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.586875 4857 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.683957 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684024 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684155 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684181 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684276 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-kolla-config\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684303 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcllr\" (UniqueName: \"kubernetes.io/projected/f76ea184-35e0-4df6-8c6e-34196ccd7901-kube-api-access-bcllr\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684348 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.684469 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-default\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.793874 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.793938 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794054 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-kolla-config\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794081 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcllr\" (UniqueName: \"kubernetes.io/projected/f76ea184-35e0-4df6-8c6e-34196ccd7901-kube-api-access-bcllr\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794134 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794247 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-default\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794363 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.794420 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.797744 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-kolla-config\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.798056 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.798810 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.799696 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f76ea184-35e0-4df6-8c6e-34196ccd7901-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.827304 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.827692 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f76ea184-35e0-4df6-8c6e-34196ccd7901-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.840489 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.846587 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcllr\" (UniqueName: \"kubernetes.io/projected/f76ea184-35e0-4df6-8c6e-34196ccd7901-kube-api-access-bcllr\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.855638 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.946546 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:23:49 crc kubenswrapper[4857]: I0318 14:23:49.946590 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/672cb6fa6706acb86833617cf522b96dcbaf7a5d19b92126605d0d9aadca10b9/globalmount\"" pod="openstack/openstack-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.149139 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bc5965b9-273b-498b-9a17-5fdb6e6af0b7\") pod \"openstack-galera-0\" (UID: \"f76ea184-35e0-4df6-8c6e-34196ccd7901\") " pod="openstack/openstack-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.283692 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" event={"ID":"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6","Type":"ContainerStarted","Data":"ee4098b1891ecc3a63e8bf64891628efe3ecf87313e18d476a65d0baab96dd40"} Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.392652 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.450861 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.618389 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:23:50 crc kubenswrapper[4857]: W0318 14:23:50.760375 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ac0772_875b_4de1_8839_d7d4c90cffee.slice/crio-7eb24d41308be462f2cccfc680e27eebd3d72f7d4f58d47089a3728a7d5b712b WatchSource:0}: Error finding container 7eb24d41308be462f2cccfc680e27eebd3d72f7d4f58d47089a3728a7d5b712b: Status 404 returned error can't find the container with id 7eb24d41308be462f2cccfc680e27eebd3d72f7d4f58d47089a3728a7d5b712b Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.880681 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.882856 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.887509 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.887730 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gm4pw" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.888459 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.890445 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.901331 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.911426 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.950951 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956386 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956437 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956600 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956687 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956874 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.956927 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdvm\" (UniqueName: \"kubernetes.io/projected/f695aad9-3bb2-4529-bb2b-5c36787464c1-kube-api-access-qrdvm\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 
14:23:50 crc kubenswrapper[4857]: I0318 14:23:50.957014 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.016086 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.037559 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.037766 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.040402 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.040427 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8bgcb" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.040610 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064380 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064450 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064483 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064547 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064618 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrdvm\" (UniqueName: \"kubernetes.io/projected/f695aad9-3bb2-4529-bb2b-5c36787464c1-kube-api-access-qrdvm\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.064927 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.067012 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.067075 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.070996 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.071545 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.071633 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.071616 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c8cbcd46ead74fd3ddad0e5277a29bb4db504b6a1c8d2197f0fd9f7dd84b360/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.074530 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f695aad9-3bb2-4529-bb2b-5c36787464c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.075372 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.092818 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f695aad9-3bb2-4529-bb2b-5c36787464c1-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.095644 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrdvm\" (UniqueName: \"kubernetes.io/projected/f695aad9-3bb2-4529-bb2b-5c36787464c1-kube-api-access-qrdvm\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.145553 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa639f92-4e4d-4fa3-bf56-267b1a2c4373\") pod \"openstack-cell1-galera-0\" (UID: \"f695aad9-3bb2-4529-bb2b-5c36787464c1\") " pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.169233 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kolla-config\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.169695 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.169729 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.169821 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-config-data\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.170372 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsqm\" (UniqueName: \"kubernetes.io/projected/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kube-api-access-8bsqm\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.275168 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kolla-config\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.274318 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kolla-config\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.275396 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.275417 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.278827 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-config-data\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.278874 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsqm\" (UniqueName: \"kubernetes.io/projected/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kube-api-access-8bsqm\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.279928 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-config-data\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.285310 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.286837 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.332798 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerStarted","Data":"83826f6e772fdebc532573e31d9113b71dfddc80ef3c32684b0eaae99ce6ccc1"} Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.337833 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bsqm\" (UniqueName: \"kubernetes.io/projected/bf21e858-d9d3-448f-bc36-522cf6f7dc2d-kube-api-access-8bsqm\") pod \"memcached-0\" (UID: \"bf21e858-d9d3-448f-bc36-522cf6f7dc2d\") " pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.345123 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerStarted","Data":"7eb24d41308be462f2cccfc680e27eebd3d72f7d4f58d47089a3728a7d5b712b"} Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.346001 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.350788 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerStarted","Data":"05fe8b517771536f54ac7f77640440a6ac64214356b1fe60f95dd49ab41c31c3"} Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.377202 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.408470 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:23:51 crc kubenswrapper[4857]: I0318 14:23:51.602336 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 18 14:23:51 crc kubenswrapper[4857]: W0318 14:23:51.672328 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf76ea184_35e0_4df6_8c6e_34196ccd7901.slice/crio-1b0c1810a9004da559b8557e41885d3a2880f1afd5a657ed632b5bd3bc0e139b WatchSource:0}: Error finding container 1b0c1810a9004da559b8557e41885d3a2880f1afd5a657ed632b5bd3bc0e139b: Status 404 returned error can't find the container with id 1b0c1810a9004da559b8557e41885d3a2880f1afd5a657ed632b5bd3bc0e139b Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:52.378826 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerStarted","Data":"1b0c1810a9004da559b8557e41885d3a2880f1afd5a657ed632b5bd3bc0e139b"} Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:52.383720 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerStarted","Data":"33e778216fed3d6a19e183a1d38d10302b31fae6d88e402c5278fc357e2a9b70"} Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.435903 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.440162 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.449060 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kq569" Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.468143 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.563103 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7bq\" (UniqueName: \"kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq\") pod \"kube-state-metrics-0\" (UID: \"e8b53cfe-8acc-431c-be7e-b6d48ce587a8\") " pod="openstack/kube-state-metrics-0" Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.671533 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7bq\" (UniqueName: \"kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq\") pod \"kube-state-metrics-0\" (UID: \"e8b53cfe-8acc-431c-be7e-b6d48ce587a8\") " pod="openstack/kube-state-metrics-0" Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.698341 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.718215 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.733790 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7bq\" (UniqueName: \"kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq\") pod \"kube-state-metrics-0\" (UID: \"e8b53cfe-8acc-431c-be7e-b6d48ce587a8\") " pod="openstack/kube-state-metrics-0" Mar 18 14:23:53 crc kubenswrapper[4857]: I0318 14:23:53.803309 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.409551 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.413085 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.415889 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.416413 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-bk5lg" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.438594 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.517225 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.517313 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/6e52a810-35c4-49bb-a0f6-83accdb52311-kube-api-access-45dgc\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 
14:23:54.548712 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerStarted","Data":"80d8c736e3f1958ccb18ac0e47418c67c834c15bd8f3d02fdc5953114c052136"} Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.619262 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bf21e858-d9d3-448f-bc36-522cf6f7dc2d","Type":"ContainerStarted","Data":"bd67a3f4c325c025c380780a811330c1b2b639d61525562fcc9680130b601411"} Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.624500 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.624586 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/6e52a810-35c4-49bb-a0f6-83accdb52311-kube-api-access-45dgc\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: E0318 14:23:54.625107 4857 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Mar 18 14:23:54 crc kubenswrapper[4857]: E0318 14:23:54.625153 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert podName:6e52a810-35c4-49bb-a0f6-83accdb52311 nodeName:}" failed. 
No retries permitted until 2026-03-18 14:23:55.125136449 +0000 UTC m=+1419.254264906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert") pod "observability-ui-dashboards-7f87b9b85b-lwdf5" (UID: "6e52a810-35c4-49bb-a0f6-83accdb52311") : secret "observability-ui-dashboards" not found Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.647537 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.661279 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dgc\" (UniqueName: \"kubernetes.io/projected/6e52a810-35c4-49bb-a0f6-83accdb52311-kube-api-access-45dgc\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.804546 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-89866dfb6-2ckqj"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.806451 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.849067 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-89866dfb6-2ckqj"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941244 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-service-ca\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941309 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-trusted-ca-bundle\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941409 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7m2\" (UniqueName: \"kubernetes.io/projected/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-kube-api-access-gf7m2\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941439 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941575 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-oauth-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.941610 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-oauth-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.989841 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.993591 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.996735 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.996861 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.996921 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.997054 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.997299 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.997550 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 18 14:23:54 crc kubenswrapper[4857]: I0318 14:23:54.997815 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pvvpn" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.004610 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.020320 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048096 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-service-ca\") pod \"console-89866dfb6-2ckqj\" (UID: 
\"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048256 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048296 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-trusted-ca-bundle\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048479 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7m2\" (UniqueName: \"kubernetes.io/projected/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-kube-api-access-gf7m2\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048532 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048680 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-oauth-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: 
\"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.048740 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-oauth-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.057234 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-service-ca\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.062896 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.063513 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-trusted-ca-bundle\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.058185 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-oauth-serving-cert\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " 
pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.074048 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.077168 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7m2\" (UniqueName: \"kubernetes.io/projected/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-kube-api-access-gf7m2\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.079150 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20035f78-fe0d-44ce-8f03-aa1bc3bf851b-console-oauth-config\") pod \"console-89866dfb6-2ckqj\" (UID: \"20035f78-fe0d-44ce-8f03-aa1bc3bf851b\") " pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.151859 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.151929 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.151971 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152006 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152043 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152082 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152123 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152149 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152302 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfswl\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152432 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.152533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.157150 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e52a810-35c4-49bb-a0f6-83accdb52311-serving-cert\") pod \"observability-ui-dashboards-7f87b9b85b-lwdf5\" (UID: \"6e52a810-35c4-49bb-a0f6-83accdb52311\") " pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.179969 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254109 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254206 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254257 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254298 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254332 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254381 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254429 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254450 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254506 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tfswl\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.254567 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.258247 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.258476 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.260945 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.273274 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.274264 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.274673 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.275273 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.286083 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.298913 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfswl\" (UniqueName: 
\"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.317915 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.317959 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/375e81b5ff671f5b992332946377b9ca3c84314088961a63afa1082ad97c465d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.380322 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.566161 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.633403 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.709984 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8b53cfe-8acc-431c-be7e-b6d48ce587a8","Type":"ContainerStarted","Data":"2beb6e9b98a2b0f0644358865d91fb192d2b88e77c2e2c03ed2a5c620559396d"} Mar 18 14:23:55 crc kubenswrapper[4857]: I0318 14:23:55.901742 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-89866dfb6-2ckqj"] Mar 18 14:23:56 crc kubenswrapper[4857]: W0318 14:23:56.032992 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20035f78_fe0d_44ce_8f03_aa1bc3bf851b.slice/crio-e3cb89123bf148eaced10ef9a02c7ec2c13779aa029624a520013da0a287c101 WatchSource:0}: Error finding container e3cb89123bf148eaced10ef9a02c7ec2c13779aa029624a520013da0a287c101: Status 404 returned error can't find the container with id e3cb89123bf148eaced10ef9a02c7ec2c13779aa029624a520013da0a287c101 Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.709880 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jvjlg"] Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.711695 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.715616 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.715961 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.716098 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gl7fx" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.716223 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7z7fh"] Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.718555 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.725859 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jvjlg"] Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.755745 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7z7fh"] Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.795723 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89866dfb6-2ckqj" event={"ID":"20035f78-fe0d-44ce-8f03-aa1bc3bf851b","Type":"ContainerStarted","Data":"e3cb89123bf148eaced10ef9a02c7ec2c13779aa029624a520013da0a287c101"} Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.806716 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ljk\" (UniqueName: \"kubernetes.io/projected/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-kube-api-access-d9ljk\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc 
kubenswrapper[4857]: I0318 14:23:56.806797 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.806867 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-ovn-controller-tls-certs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.806897 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.806930 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-log\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.806965 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-log-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807196 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-etc-ovs\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807289 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-lib\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807333 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-scripts\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807495 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-run\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807529 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zcs\" (UniqueName: \"kubernetes.io/projected/635e665d-2bdc-4e46-913d-0362aa4d4e3d-kube-api-access-z2zcs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807590 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/635e665d-2bdc-4e46-913d-0362aa4d4e3d-scripts\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.807813 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-combined-ca-bundle\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909686 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-log\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909786 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-log-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909829 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-etc-ovs\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909883 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-lib\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909911 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-scripts\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.909996 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-run\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910018 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2zcs\" (UniqueName: \"kubernetes.io/projected/635e665d-2bdc-4e46-913d-0362aa4d4e3d-kube-api-access-z2zcs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910060 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/635e665d-2bdc-4e46-913d-0362aa4d4e3d-scripts\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910098 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-combined-ca-bundle\") pod \"ovn-controller-jvjlg\" (UID: 
\"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910126 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ljk\" (UniqueName: \"kubernetes.io/projected/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-kube-api-access-d9ljk\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910172 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-ovn-controller-tls-certs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910189 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910417 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-log\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 
14:23:56.910545 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910605 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-run-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910666 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-lib\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.910674 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-var-run\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.912088 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-etc-ovs\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.913024 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/635e665d-2bdc-4e46-913d-0362aa4d4e3d-scripts\") pod 
\"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.913080 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-scripts\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.913261 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/635e665d-2bdc-4e46-913d-0362aa4d4e3d-var-log-ovn\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.919104 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-ovn-controller-tls-certs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.934273 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ljk\" (UniqueName: \"kubernetes.io/projected/583a3a2f-591c-4cb4-96d7-3f1ad08441a8-kube-api-access-d9ljk\") pod \"ovn-controller-ovs-7z7fh\" (UID: \"583a3a2f-591c-4cb4-96d7-3f1ad08441a8\") " pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:56 crc kubenswrapper[4857]: I0318 14:23:56.935845 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635e665d-2bdc-4e46-913d-0362aa4d4e3d-combined-ca-bundle\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:56 crc 
kubenswrapper[4857]: I0318 14:23:56.945190 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2zcs\" (UniqueName: \"kubernetes.io/projected/635e665d-2bdc-4e46-913d-0362aa4d4e3d-kube-api-access-z2zcs\") pod \"ovn-controller-jvjlg\" (UID: \"635e665d-2bdc-4e46-913d-0362aa4d4e3d\") " pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.039330 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.039413 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.039473 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.040445 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.040512 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" 
containerName="machine-config-daemon" containerID="cri-o://91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37" gracePeriod=600 Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.071736 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5"] Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.092907 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jvjlg" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.111228 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.122978 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.153431 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.159476 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.205619 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.205879 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-znsfl" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.206446 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.207462 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.208416 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216353 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216402 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216431 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-config\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216461 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7zq\" (UniqueName: \"kubernetes.io/projected/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-kube-api-access-6d7zq\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216485 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.216512 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.219805 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\") pod \"ovsdbserver-nb-0\" (UID: 
\"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.238169 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326334 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326399 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-config\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326442 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d7zq\" (UniqueName: \"kubernetes.io/projected/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-kube-api-access-6d7zq\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326472 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326506 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" 
(UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326563 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326654 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.326710 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.327165 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.327538 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-config\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.327953 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.335241 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.339345 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.339404 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e6c2a01249a8c051c2c582bcb25d1739e3704371c5988710635025660ce8bd9e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.340971 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.362261 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d7zq\" (UniqueName: 
\"kubernetes.io/projected/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-kube-api-access-6d7zq\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.389895 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.427052 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d450ce9-d588-4c6c-9096-846c70c27fe8\") pod \"ovsdbserver-nb-0\" (UID: \"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0\") " pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.524714 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.847505 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37" exitCode=0 Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.847554 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37"} Mar 18 14:23:57 crc kubenswrapper[4857]: I0318 14:23:57.847607 4857 scope.go:117] "RemoveContainer" containerID="5d02ada7b61718d2758e386a863bb922baadadd5b27ecf33deb78043773cecc9" Mar 18 14:23:58 crc kubenswrapper[4857]: W0318 14:23:58.871480 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e52a810_35c4_49bb_a0f6_83accdb52311.slice/crio-8cdc9add9949b7e18470fa1369066b0c019e3d7a1b185748b03ca4dd571a1a9e WatchSource:0}: Error finding container 8cdc9add9949b7e18470fa1369066b0c019e3d7a1b185748b03ca4dd571a1a9e: Status 404 returned error can't find the container with id 8cdc9add9949b7e18470fa1369066b0c019e3d7a1b185748b03ca4dd571a1a9e Mar 18 14:23:59 crc kubenswrapper[4857]: I0318 14:23:59.808378 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jvjlg"] Mar 18 14:23:59 crc kubenswrapper[4857]: W0318 14:23:59.818917 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod635e665d_2bdc_4e46_913d_0362aa4d4e3d.slice/crio-ab62220ce183375c8fb3d4713020a310c9b27531255ab355ae134d73b651d9e8 WatchSource:0}: Error finding container ab62220ce183375c8fb3d4713020a310c9b27531255ab355ae134d73b651d9e8: Status 404 returned error can't find the container with 
id ab62220ce183375c8fb3d4713020a310c9b27531255ab355ae134d73b651d9e8 Mar 18 14:23:59 crc kubenswrapper[4857]: I0318 14:23:59.903042 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jvjlg" event={"ID":"635e665d-2bdc-4e46-913d-0362aa4d4e3d","Type":"ContainerStarted","Data":"ab62220ce183375c8fb3d4713020a310c9b27531255ab355ae134d73b651d9e8"} Mar 18 14:23:59 crc kubenswrapper[4857]: I0318 14:23:59.905854 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" event={"ID":"6e52a810-35c4-49bb-a0f6-83accdb52311","Type":"ContainerStarted","Data":"8cdc9add9949b7e18470fa1369066b0c019e3d7a1b185748b03ca4dd571a1a9e"} Mar 18 14:23:59 crc kubenswrapper[4857]: I0318 14:23:59.910371 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerStarted","Data":"71eacd139bf133b4eb7195a232d8c32154193a16c8f8f51dd2aff958a8ef0f8c"} Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.174847 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564064-jhjx7"] Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.211494 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564064-jhjx7"] Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.211679 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.214378 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.216096 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.216391 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.234154 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.236596 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.241253 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.241614 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.242166 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rqkn4" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.242408 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.247577 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.309325 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5rr6\" (UniqueName: 
\"kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6\") pod \"auto-csr-approver-29564064-jhjx7\" (UID: \"eb34e902-9484-4d17-97ab-77985e7714e4\") " pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.412949 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.413121 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpgs\" (UniqueName: \"kubernetes.io/projected/82585f8a-7069-47cb-b10e-2c83903ddc08-kube-api-access-htpgs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.413320 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.413457 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.413577 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n5rr6\" (UniqueName: \"kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6\") pod \"auto-csr-approver-29564064-jhjx7\" (UID: \"eb34e902-9484-4d17-97ab-77985e7714e4\") " pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.413798 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.419833 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-config\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.419891 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.420267 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.447396 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5rr6\" (UniqueName: 
\"kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6\") pod \"auto-csr-approver-29564064-jhjx7\" (UID: \"eb34e902-9484-4d17-97ab-77985e7714e4\") " pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523436 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523517 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpgs\" (UniqueName: \"kubernetes.io/projected/82585f8a-7069-47cb-b10e-2c83903ddc08-kube-api-access-htpgs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523557 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523626 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523783 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523920 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-config\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.523961 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.524114 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.526316 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.527278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82585f8a-7069-47cb-b10e-2c83903ddc08-config\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 
14:24:00.528963 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.531006 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.531041 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8b05bc508356c30a5ac90bf6e2dcd96b50e1205e1a50d7854b5b6b6ca7dd410/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.539406 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.541775 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.544504 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.554870 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82585f8a-7069-47cb-b10e-2c83903ddc08-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.556268 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpgs\" (UniqueName: \"kubernetes.io/projected/82585f8a-7069-47cb-b10e-2c83903ddc08-kube-api-access-htpgs\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.607998 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d84f2819-0c01-470f-a3f5-4feb74048d78\") pod \"ovsdbserver-sb-0\" (UID: \"82585f8a-7069-47cb-b10e-2c83903ddc08\") " pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.939783 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.966247 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89866dfb6-2ckqj" event={"ID":"20035f78-fe0d-44ce-8f03-aa1bc3bf851b","Type":"ContainerStarted","Data":"924bc4bdda5e0ae75a1ea2d53e86bee0bd9f5c6bf875cd098c4107cb2fc05d4e"} Mar 18 14:24:00 crc kubenswrapper[4857]: I0318 14:24:00.997324 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-89866dfb6-2ckqj" podStartSLOduration=6.997299009 podStartE2EDuration="6.997299009s" podCreationTimestamp="2026-03-18 14:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:24:00.988181799 +0000 UTC m=+1425.117310256" watchObservedRunningTime="2026-03-18 14:24:00.997299009 +0000 UTC m=+1425.126427466" Mar 18 14:24:01 crc kubenswrapper[4857]: I0318 14:24:01.511697 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 14:24:01 crc kubenswrapper[4857]: I0318 14:24:01.591204 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564064-jhjx7"] Mar 18 14:24:02 crc kubenswrapper[4857]: I0318 14:24:02.322829 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7z7fh"] Mar 18 14:24:05 crc kubenswrapper[4857]: I0318 14:24:05.183878 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:24:05 crc kubenswrapper[4857]: I0318 14:24:05.184797 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:24:05 crc kubenswrapper[4857]: I0318 14:24:05.192044 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 
14:24:05 crc kubenswrapper[4857]: I0318 14:24:05.355012 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 14:24:05 crc kubenswrapper[4857]: I0318 14:24:05.423321 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:24:08 crc kubenswrapper[4857]: W0318 14:24:08.034017 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75dc7be5_1a0a_4b0b_a33a_1a2a852ccde0.slice/crio-32b624e68361c404ab35b767ccff47b49052905773cbdd09bfd39d88008ac1ec WatchSource:0}: Error finding container 32b624e68361c404ab35b767ccff47b49052905773cbdd09bfd39d88008ac1ec: Status 404 returned error can't find the container with id 32b624e68361c404ab35b767ccff47b49052905773cbdd09bfd39d88008ac1ec Mar 18 14:24:08 crc kubenswrapper[4857]: W0318 14:24:08.036171 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod583a3a2f_591c_4cb4_96d7_3f1ad08441a8.slice/crio-c336161c7027076a7711b4a4421c32744314b88e36b0200e42b4b5383fc29e6c WatchSource:0}: Error finding container c336161c7027076a7711b4a4421c32744314b88e36b0200e42b4b5383fc29e6c: Status 404 returned error can't find the container with id c336161c7027076a7711b4a4421c32744314b88e36b0200e42b4b5383fc29e6c Mar 18 14:24:08 crc kubenswrapper[4857]: W0318 14:24:08.048333 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb34e902_9484_4d17_97ab_77985e7714e4.slice/crio-234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62 WatchSource:0}: Error finding container 234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62: Status 404 returned error can't find the container with id 234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62 Mar 18 14:24:08 crc 
kubenswrapper[4857]: I0318 14:24:08.379812 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0","Type":"ContainerStarted","Data":"32b624e68361c404ab35b767ccff47b49052905773cbdd09bfd39d88008ac1ec"} Mar 18 14:24:08 crc kubenswrapper[4857]: I0318 14:24:08.381352 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" event={"ID":"eb34e902-9484-4d17-97ab-77985e7714e4","Type":"ContainerStarted","Data":"234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62"} Mar 18 14:24:08 crc kubenswrapper[4857]: I0318 14:24:08.382986 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7z7fh" event={"ID":"583a3a2f-591c-4cb4-96d7-3f1ad08441a8","Type":"ContainerStarted","Data":"c336161c7027076a7711b4a4421c32744314b88e36b0200e42b4b5383fc29e6c"} Mar 18 14:24:13 crc kubenswrapper[4857]: E0318 14:24:13.345159 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 18 14:24:13 crc kubenswrapper[4857]: E0318 14:24:13.345877 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx5z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(83d0525c-c26a-4aae-ac6c-40c625cf5d37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:13 crc kubenswrapper[4857]: E0318 14:24:13.347210 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" Mar 18 14:24:13 crc kubenswrapper[4857]: E0318 14:24:13.550566 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.302463 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.303109 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpz2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(865ce56e-0936-4018-9dd8-17343c925b91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.304408 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="865ce56e-0936-4018-9dd8-17343c925b91" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.306114 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.306734 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f269c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(062e357c-5b17-403b-add2-71ce46b3423a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:16 crc 
kubenswrapper[4857]: E0318 14:24:16.308088 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="062e357c-5b17-403b-add2-71ce46b3423a" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.586936 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="865ce56e-0936-4018-9dd8-17343c925b91" Mar 18 14:24:16 crc kubenswrapper[4857]: E0318 14:24:16.587668 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="062e357c-5b17-403b-add2-71ce46b3423a" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.209097 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qs7p9"] Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.211083 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.213760 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.225494 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qs7p9"] Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.311454 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-combined-ca-bundle\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.311836 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovn-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.311955 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb755c3a-d583-40d1-a67d-1af716edbadb-config\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.312068 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " 
pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.312194 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm94s\" (UniqueName: \"kubernetes.io/projected/fb755c3a-d583-40d1-a67d-1af716edbadb-kube-api-access-jm94s\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.312407 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovs-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.373228 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"] Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.400488 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.402477 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.407077 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.423808 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovs-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.423807 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovs-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.423931 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.423961 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.423982 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-combined-ca-bundle\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424033 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vnx\" (UniqueName: \"kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424098 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovn-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424123 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424145 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb755c3a-d583-40d1-a67d-1af716edbadb-config\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424177 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424217 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm94s\" (UniqueName: \"kubernetes.io/projected/fb755c3a-d583-40d1-a67d-1af716edbadb-kube-api-access-jm94s\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.424651 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb755c3a-d583-40d1-a67d-1af716edbadb-ovn-rundir\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.425482 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb755c3a-d583-40d1-a67d-1af716edbadb-config\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.431024 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.443699 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-combined-ca-bundle\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.447382 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb755c3a-d583-40d1-a67d-1af716edbadb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.448227 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm94s\" (UniqueName: \"kubernetes.io/projected/fb755c3a-d583-40d1-a67d-1af716edbadb-kube-api-access-jm94s\") pod \"ovn-controller-metrics-qs7p9\" (UID: \"fb755c3a-d583-40d1-a67d-1af716edbadb\") " pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.526409 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.526603 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.526633 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.526671 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-t4vnx\" (UniqueName: \"kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.527976 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.528147 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.530005 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.549284 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vnx\" (UniqueName: \"kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx\") pod \"dnsmasq-dns-7fd796d7df-9wtzl\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.565978 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qs7p9" Mar 18 14:24:20 crc kubenswrapper[4857]: I0318 14:24:20.733107 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:28 crc kubenswrapper[4857]: E0318 14:24:28.672463 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 18 14:24:28 crc kubenswrapper[4857]: E0318 14:24:28.673105 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr8h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-qbw5m_openstack(774a2c87-55ef-4bf7-a34a-ed282578d470): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:28 crc kubenswrapper[4857]: E0318 14:24:28.675092 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" podUID="774a2c87-55ef-4bf7-a34a-ed282578d470" Mar 18 14:24:30 crc kubenswrapper[4857]: I0318 14:24:30.503344 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6784499cd7-5vqcz" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerName="console" containerID="cri-o://271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd" gracePeriod=15 Mar 18 14:24:30 crc kubenswrapper[4857]: I0318 14:24:30.627152 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 14:24:30 crc kubenswrapper[4857]: E0318 14:24:30.720309 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528a3d75_0557_4ac8_bf75_36590c9929a0.slice/crio-271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528a3d75_0557_4ac8_bf75_36590c9929a0.slice/crio-conmon-271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:24:30 crc kubenswrapper[4857]: I0318 14:24:30.980533 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6784499cd7-5vqcz_528a3d75-0557-4ac8-bf75-36590c9929a0/console/0.log" Mar 18 14:24:30 crc kubenswrapper[4857]: I0318 14:24:30.982014 4857 generic.go:334] "Generic (PLEG): container finished" podID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerID="271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd" exitCode=2 Mar 18 14:24:30 crc kubenswrapper[4857]: I0318 14:24:30.982100 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784499cd7-5vqcz" 
event={"ID":"528a3d75-0557-4ac8-bf75-36590c9929a0","Type":"ContainerDied","Data":"271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd"} Mar 18 14:24:32 crc kubenswrapper[4857]: I0318 14:24:32.348128 4857 patch_prober.go:28] interesting pod/console-6784499cd7-5vqcz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.94:8443/health\": dial tcp 10.217.0.94:8443: connect: connection refused" start-of-body= Mar 18 14:24:32 crc kubenswrapper[4857]: I0318 14:24:32.348179 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6784499cd7-5vqcz" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.94:8443/health\": dial tcp 10.217.0.94:8443: connect: connection refused" Mar 18 14:24:34 crc kubenswrapper[4857]: I0318 14:24:34.528722 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.099271 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.100292 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59bhfhcfh5d6h78h68bh677h65bhc9h654hc7hf5h5b9hbdh9h76h58ch674h548hb4hd7h95hc8h66bh685h54ch5f7h667h66h698h644hcq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6d7zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Com
mand:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.103676 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.344047 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.344746 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n94h56h579h7chf4h5fdh686h655h648h645h8bhd4h565h65ch585h648h57hc8hc8h64h5ch56fh5bfh5bch65h55dhfh65dh5d4h657h596h694q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:comb
ined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2zcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-jvjlg_openstack(635e665d-2bdc-4e46-913d-0362aa4d4e3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 
14:24:39.346093 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-jvjlg" podUID="635e665d-2bdc-4e46-913d-0362aa4d4e3d" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.447114 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.447323 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6gfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6jms7_openstack(68295375-a954-4071-8855-989fac62c318): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.449108 4857 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" podUID="68295375-a954-4071-8855-989fac62c318" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.464444 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.464650 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl5t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-9h9ps_openstack(9e8855cb-b484-488c-8a84-1d3962dc297f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.466742 4857 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" podUID="9e8855cb-b484-488c-8a84-1d3962dc297f" Mar 18 14:24:39 crc kubenswrapper[4857]: W0318 14:24:39.475552 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82585f8a_7069_47cb_b10e_2c83903ddc08.slice/crio-84a0f330457cd764b198c3184e1e20fb45893073b89da5f2e593257379ccf8f9 WatchSource:0}: Error finding container 84a0f330457cd764b198c3184e1e20fb45893073b89da5f2e593257379ccf8f9: Status 404 returned error can't find the container with id 84a0f330457cd764b198c3184e1e20fb45893073b89da5f2e593257379ccf8f9 Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.475683 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.475948 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f46ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-xstkt_openstack(5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.477199 4857 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" podUID="5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.592941 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.753262 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr8h9\" (UniqueName: \"kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9\") pod \"774a2c87-55ef-4bf7-a34a-ed282578d470\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.754952 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config\") pod \"774a2c87-55ef-4bf7-a34a-ed282578d470\" (UID: \"774a2c87-55ef-4bf7-a34a-ed282578d470\") " Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.755789 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config" (OuterVolumeSpecName: "config") pod "774a2c87-55ef-4bf7-a34a-ed282578d470" (UID: "774a2c87-55ef-4bf7-a34a-ed282578d470"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.756073 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9" (OuterVolumeSpecName: "kube-api-access-pr8h9") pod "774a2c87-55ef-4bf7-a34a-ed282578d470" (UID: "774a2c87-55ef-4bf7-a34a-ed282578d470"). InnerVolumeSpecName "kube-api-access-pr8h9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.756395 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/774a2c87-55ef-4bf7-a34a-ed282578d470-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.756428 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr8h9\" (UniqueName: \"kubernetes.io/projected/774a2c87-55ef-4bf7-a34a-ed282578d470-kube-api-access-pr8h9\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.862140 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"82585f8a-7069-47cb-b10e-2c83903ddc08","Type":"ContainerStarted","Data":"84a0f330457cd764b198c3184e1e20fb45893073b89da5f2e593257379ccf8f9"} Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.866067 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"} Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.870517 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.870996 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-qbw5m" event={"ID":"774a2c87-55ef-4bf7-a34a-ed282578d470","Type":"ContainerDied","Data":"e0a259ff385280928f90e7d0eaa6713027e3235cc7fcb0928914834987660110"} Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.876105 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" podUID="9e8855cb-b484-488c-8a84-1d3962dc297f" Mar 18 14:24:39 crc kubenswrapper[4857]: E0318 14:24:39.876118 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-jvjlg" podUID="635e665d-2bdc-4e46-913d-0362aa4d4e3d" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.979631 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6784499cd7-5vqcz_528a3d75-0557-4ac8-bf75-36590c9929a0/console/0.log" Mar 18 14:24:39 crc kubenswrapper[4857]: I0318 14:24:39.979768 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.609997 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.622628 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-qbw5m"] Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.643383 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qs7p9"] Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966049 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966183 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966242 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966338 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 
14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966434 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssn98\" (UniqueName: \"kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966469 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.966554 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle\") pod \"528a3d75-0557-4ac8-bf75-36590c9929a0\" (UID: \"528a3d75-0557-4ac8-bf75-36590c9929a0\") " Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.967302 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.967343 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config" (OuterVolumeSpecName: "console-config") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.967569 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca" (OuterVolumeSpecName: "service-ca") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.968926 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.983512 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6784499cd7-5vqcz_528a3d75-0557-4ac8-bf75-36590c9929a0/console/0.log" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.984787 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6784499cd7-5vqcz" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.985314 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6784499cd7-5vqcz" event={"ID":"528a3d75-0557-4ac8-bf75-36590c9929a0","Type":"ContainerDied","Data":"054cd4e75752fc5b4360c9c78026b5da07a868f1294761f9fbe38c58138e1b71"} Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.985354 4857 scope.go:117] "RemoveContainer" containerID="271f07298313b51c61c8ac36561072e042605370ec13443cf10475df2db38fcd" Mar 18 14:24:40 crc kubenswrapper[4857]: I0318 14:24:40.997051 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98" (OuterVolumeSpecName: "kube-api-access-ssn98") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "kube-api-access-ssn98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.008834 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" (UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009457 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssn98\" (UniqueName: \"kubernetes.io/projected/528a3d75-0557-4ac8-bf75-36590c9929a0-kube-api-access-ssn98\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009481 4857 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009494 4857 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009507 4857 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009520 4857 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-service-ca\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.009533 4857 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/528a3d75-0557-4ac8-bf75-36590c9929a0-console-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.011983 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "528a3d75-0557-4ac8-bf75-36590c9929a0" 
(UID: "528a3d75-0557-4ac8-bf75-36590c9929a0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.112593 4857 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/528a3d75-0557-4ac8-bf75-36590c9929a0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.206991 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774a2c87-55ef-4bf7-a34a-ed282578d470" path="/var/lib/kubelet/pods/774a2c87-55ef-4bf7-a34a-ed282578d470/volumes" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.332743 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.347417 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6784499cd7-5vqcz"] Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.425260 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.477920 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.484312 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639219 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46ff\" (UniqueName: \"kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff\") pod \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639454 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config\") pod \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639531 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc\") pod \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\" (UID: \"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639633 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc\") pod \"68295375-a954-4071-8855-989fac62c318\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639680 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6gfx\" (UniqueName: \"kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx\") pod \"68295375-a954-4071-8855-989fac62c318\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.639909 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config\") pod \"68295375-a954-4071-8855-989fac62c318\" (UID: \"68295375-a954-4071-8855-989fac62c318\") " Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.640235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config" (OuterVolumeSpecName: "config") pod "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6" (UID: "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.640292 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6" (UID: "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.640910 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68295375-a954-4071-8855-989fac62c318" (UID: "68295375-a954-4071-8855-989fac62c318"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.641208 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config" (OuterVolumeSpecName: "config") pod "68295375-a954-4071-8855-989fac62c318" (UID: "68295375-a954-4071-8855-989fac62c318"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.641299 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.641324 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.641339 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.646475 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff" (OuterVolumeSpecName: "kube-api-access-f46ff") pod "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6" (UID: "5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6"). InnerVolumeSpecName "kube-api-access-f46ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.651394 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx" (OuterVolumeSpecName: "kube-api-access-m6gfx") pod "68295375-a954-4071-8855-989fac62c318" (UID: "68295375-a954-4071-8855-989fac62c318"). InnerVolumeSpecName "kube-api-access-m6gfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.743584 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68295375-a954-4071-8855-989fac62c318-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.744078 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46ff\" (UniqueName: \"kubernetes.io/projected/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6-kube-api-access-f46ff\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.744171 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6gfx\" (UniqueName: \"kubernetes.io/projected/68295375-a954-4071-8855-989fac62c318-kube-api-access-m6gfx\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:41 crc kubenswrapper[4857]: E0318 14:24:41.992487 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Mar 18 14:24:41 crc kubenswrapper[4857]: E0318 14:24:41.992542 4857 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Mar 18 14:24:41 crc kubenswrapper[4857]: E0318 14:24:41.992701 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vk7bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(e8b53cfe-8acc-431c-be7e-b6d48ce587a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Mar 18 14:24:41 crc kubenswrapper[4857]: E0318 14:24:41.994067 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.994126 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" event={"ID":"5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6","Type":"ContainerDied","Data":"ee4098b1891ecc3a63e8bf64891628efe3ecf87313e18d476a65d0baab96dd40"} Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.994176 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xstkt" Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.999576 4857 generic.go:334] "Generic (PLEG): container finished" podID="eb34e902-9484-4d17-97ab-77985e7714e4" containerID="9a78288c3549cdc08f8a178272dbbee32a95b8143037465ec0e2ea7ba5a20084" exitCode=0 Mar 18 14:24:41 crc kubenswrapper[4857]: I0318 14:24:41.999626 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" event={"ID":"eb34e902-9484-4d17-97ab-77985e7714e4","Type":"ContainerDied","Data":"9a78288c3549cdc08f8a178272dbbee32a95b8143037465ec0e2ea7ba5a20084"} Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.003874 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" event={"ID":"68295375-a954-4071-8855-989fac62c318","Type":"ContainerDied","Data":"e5a6c1ec603a73422a5c45bbaff0796b991301d1a48ee358b103a49c737388cf"} Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.004001 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6jms7" Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.009117 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qs7p9" event={"ID":"fb755c3a-d583-40d1-a67d-1af716edbadb","Type":"ContainerStarted","Data":"0d222f9f0fbcc01f50dfc936e4b6542db16897bf1eed29369a440b975b4bf89f"} Mar 18 14:24:42 crc kubenswrapper[4857]: W0318 14:24:42.021175 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode85a4a78_b668_4913_969d_03ee773c74f9.slice/crio-0d20c08e73b2e69e099334088833143bbfa2d1641ab42338d3ff5f687d25c173 WatchSource:0}: Error finding container 0d20c08e73b2e69e099334088833143bbfa2d1641ab42338d3ff5f687d25c173: Status 404 returned error can't find the container with id 0d20c08e73b2e69e099334088833143bbfa2d1641ab42338d3ff5f687d25c173 Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.196843 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.207231 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6jms7"] Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.231363 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"] Mar 18 14:24:42 crc kubenswrapper[4857]: I0318 14:24:42.231443 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xstkt"] Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.027034 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bf21e858-d9d3-448f-bc36-522cf6f7dc2d","Type":"ContainerStarted","Data":"c31da843b1418961d7019de4adeb2ee1029ae68be1feb2e9150ed1731cc705f9"} Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.027727 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/memcached-0" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.029968 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" event={"ID":"6e52a810-35c4-49bb-a0f6-83accdb52311","Type":"ContainerStarted","Data":"9297f774d3befcf4a10e25e8e6a0033819d8a899e8ad84f291937c2a631db8b5"} Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.032161 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" event={"ID":"e85a4a78-b668-4913-969d-03ee773c74f9","Type":"ContainerStarted","Data":"0d20c08e73b2e69e099334088833143bbfa2d1641ab42338d3ff5f687d25c173"} Mar 18 14:24:43 crc kubenswrapper[4857]: E0318 14:24:43.034401 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.063787 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.045924836 podStartE2EDuration="53.063735143s" podCreationTimestamp="2026-03-18 14:23:50 +0000 UTC" firstStartedPulling="2026-03-18 14:23:53.796880734 +0000 UTC m=+1417.926009191" lastFinishedPulling="2026-03-18 14:24:27.814691031 +0000 UTC m=+1451.943819498" observedRunningTime="2026-03-18 14:24:43.057074525 +0000 UTC m=+1467.186203072" watchObservedRunningTime="2026-03-18 14:24:43.063735143 +0000 UTC m=+1467.192863610" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.129606 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7f87b9b85b-lwdf5" podStartSLOduration=8.544107217 podStartE2EDuration="49.129577046s" podCreationTimestamp="2026-03-18 
14:23:54 +0000 UTC" firstStartedPulling="2026-03-18 14:23:58.877807458 +0000 UTC m=+1423.006935915" lastFinishedPulling="2026-03-18 14:24:39.463277257 +0000 UTC m=+1463.592405744" observedRunningTime="2026-03-18 14:24:43.113779469 +0000 UTC m=+1467.242907936" watchObservedRunningTime="2026-03-18 14:24:43.129577046 +0000 UTC m=+1467.258705513" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.181658 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" path="/var/lib/kubelet/pods/528a3d75-0557-4ac8-bf75-36590c9929a0/volumes" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.183425 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6" path="/var/lib/kubelet/pods/5bbbb3bf-8b1f-4751-b5ee-9d4fd5c7d6c6/volumes" Mar 18 14:24:43 crc kubenswrapper[4857]: I0318 14:24:43.183906 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68295375-a954-4071-8855-989fac62c318" path="/var/lib/kubelet/pods/68295375-a954-4071-8855-989fac62c318/volumes" Mar 18 14:24:44 crc kubenswrapper[4857]: I0318 14:24:44.252307 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerStarted","Data":"a9cb7db6e81b9b3d0e4cf4b29f8bfa5b3c02326a9cd686f703453d279af6c7e2"} Mar 18 14:24:44 crc kubenswrapper[4857]: I0318 14:24:44.258039 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerStarted","Data":"7d1427952d362233c9d1826cf66228a45035946c097dd5362c988677f4388a9b"} Mar 18 14:24:45 crc kubenswrapper[4857]: I0318 14:24:45.275045 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerStarted","Data":"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c"} Mar 18 14:24:45 crc kubenswrapper[4857]: I0318 14:24:45.536805 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:45 crc kubenswrapper[4857]: I0318 14:24:45.662142 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5rr6\" (UniqueName: \"kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6\") pod \"eb34e902-9484-4d17-97ab-77985e7714e4\" (UID: \"eb34e902-9484-4d17-97ab-77985e7714e4\") " Mar 18 14:24:45 crc kubenswrapper[4857]: I0318 14:24:45.668334 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6" (OuterVolumeSpecName: "kube-api-access-n5rr6") pod "eb34e902-9484-4d17-97ab-77985e7714e4" (UID: "eb34e902-9484-4d17-97ab-77985e7714e4"). InnerVolumeSpecName "kube-api-access-n5rr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:45 crc kubenswrapper[4857]: I0318 14:24:45.813727 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5rr6\" (UniqueName: \"kubernetes.io/projected/eb34e902-9484-4d17-97ab-77985e7714e4-kube-api-access-n5rr6\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.444666 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" event={"ID":"eb34e902-9484-4d17-97ab-77985e7714e4","Type":"ContainerDied","Data":"234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62"} Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.444713 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564064-jhjx7" Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.444778 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234191eefa6fd088a2355a4ddd46cc316fb195e3b6ac5dc6c9d28c4ce5bf4d62" Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.447527 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerStarted","Data":"79cd86a9b209713cb25346666f89130e527be7e8ed5da09c5cba144a8b85f2c8"} Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.453963 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerStarted","Data":"2c7382308832b285f0127b2fb40e1de03d1be2ba2f0549232624b720577301f4"} Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.624083 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564058-vgn9l"] Mar 18 14:24:46 crc kubenswrapper[4857]: I0318 14:24:46.634340 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564058-vgn9l"] Mar 18 14:24:47 crc kubenswrapper[4857]: I0318 14:24:47.187240 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3333def7-bf08-47f6-9e48-06c0f6adb7ef" path="/var/lib/kubelet/pods/3333def7-bf08-47f6-9e48-06c0f6adb7ef/volumes" Mar 18 14:24:47 crc kubenswrapper[4857]: I0318 14:24:47.488772 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7z7fh" event={"ID":"583a3a2f-591c-4cb4-96d7-3f1ad08441a8","Type":"ContainerStarted","Data":"830ea885b9df47b8baeed0498d0a0a7c319ca2dd6d43824c99e34863a34b0fdb"} Mar 18 14:24:48 crc kubenswrapper[4857]: I0318 14:24:48.947605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"82585f8a-7069-47cb-b10e-2c83903ddc08","Type":"ContainerStarted","Data":"fe4d2f2872ad27cb09ded9632f038b93e79510f8f2884bb54565d7afd53a410c"} Mar 18 14:24:49 crc kubenswrapper[4857]: E0318 14:24:49.802769 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.394928 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerStarted","Data":"f938c7ba217900403aaae4bef2fa16d3971dcaa20a53f6ecbd6cce1225c680a7"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.400326 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0","Type":"ContainerStarted","Data":"108bc85c156064659dcf19e7827c856c6619d75dc1634d3ed17ae05818674d3e"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.412441 4857 generic.go:334] "Generic (PLEG): container finished" podID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerID="a9cb7db6e81b9b3d0e4cf4b29f8bfa5b3c02326a9cd686f703453d279af6c7e2" exitCode=0 Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.412603 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerDied","Data":"a9cb7db6e81b9b3d0e4cf4b29f8bfa5b3c02326a9cd686f703453d279af6c7e2"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.417792 4857 generic.go:334] "Generic (PLEG): container finished" podID="e85a4a78-b668-4913-969d-03ee773c74f9" containerID="4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89" exitCode=0 Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.417863 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" event={"ID":"e85a4a78-b668-4913-969d-03ee773c74f9","Type":"ContainerDied","Data":"4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.425116 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"82585f8a-7069-47cb-b10e-2c83903ddc08","Type":"ContainerStarted","Data":"37b547f15e7824b173078356e931a2927bfa04a1044061402ef170b0c25a8fed"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.430426 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerStarted","Data":"271778425daaf4fd5103cf0e854ebbdd9d1759a853d19656e12ae26244a5f2f6"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.432175 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qs7p9" event={"ID":"fb755c3a-d583-40d1-a67d-1af716edbadb","Type":"ContainerStarted","Data":"15bdfe4be44566b98eb4045006d5e6c26aa1f2d40350f67efe37dcc0bd3f6ae7"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.434095 4857 generic.go:334] "Generic (PLEG): container finished" podID="583a3a2f-591c-4cb4-96d7-3f1ad08441a8" containerID="830ea885b9df47b8baeed0498d0a0a7c319ca2dd6d43824c99e34863a34b0fdb" exitCode=0 Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.434132 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7z7fh" event={"ID":"583a3a2f-591c-4cb4-96d7-3f1ad08441a8","Type":"ContainerDied","Data":"830ea885b9df47b8baeed0498d0a0a7c319ca2dd6d43824c99e34863a34b0fdb"} Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.542942 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=44.338691995 podStartE2EDuration="51.542907028s" 
podCreationTimestamp="2026-03-18 14:23:59 +0000 UTC" firstStartedPulling="2026-03-18 14:24:39.477797292 +0000 UTC m=+1463.606925759" lastFinishedPulling="2026-03-18 14:24:46.682012345 +0000 UTC m=+1470.811140792" observedRunningTime="2026-03-18 14:24:50.518363372 +0000 UTC m=+1474.647491829" watchObservedRunningTime="2026-03-18 14:24:50.542907028 +0000 UTC m=+1474.672035485" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.615329 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qs7p9" podStartSLOduration=24.547070261000002 podStartE2EDuration="30.615309866s" podCreationTimestamp="2026-03-18 14:24:20 +0000 UTC" firstStartedPulling="2026-03-18 14:24:41.0661488 +0000 UTC m=+1465.195277257" lastFinishedPulling="2026-03-18 14:24:47.134388405 +0000 UTC m=+1471.263516862" observedRunningTime="2026-03-18 14:24:50.598110304 +0000 UTC m=+1474.727238761" watchObservedRunningTime="2026-03-18 14:24:50.615309866 +0000 UTC m=+1474.744438313" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.916113 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.945947 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.967693 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:24:50 crc kubenswrapper[4857]: E0318 14:24:50.968164 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerName="console" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.968187 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerName="console" Mar 18 14:24:50 crc kubenswrapper[4857]: E0318 14:24:50.968209 4857 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="eb34e902-9484-4d17-97ab-77985e7714e4" containerName="oc" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.968217 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb34e902-9484-4d17-97ab-77985e7714e4" containerName="oc" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.968418 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb34e902-9484-4d17-97ab-77985e7714e4" containerName="oc" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.968451 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="528a3d75-0557-4ac8-bf75-36590c9929a0" containerName="console" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.975555 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.979397 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 18 14:24:50 crc kubenswrapper[4857]: I0318 14:24:50.986530 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.007901 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxzm\" (UniqueName: \"kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.008215 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc 
kubenswrapper[4857]: I0318 14:24:51.008357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.008383 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.008415 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.112236 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.112306 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.112350 
4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.112652 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpxzm\" (UniqueName: \"kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.112692 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.113656 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.115254 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.115884 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.116117 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.134710 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpxzm\" (UniqueName: \"kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm\") pod \"dnsmasq-dns-86db49b7ff-jfmsr\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.312208 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.323671 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.326808 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc\") pod \"9e8855cb-b484-488c-8a84-1d3962dc297f\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.327035 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config\") pod \"9e8855cb-b484-488c-8a84-1d3962dc297f\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.327130 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl5t9\" (UniqueName: \"kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9\") pod \"9e8855cb-b484-488c-8a84-1d3962dc297f\" (UID: \"9e8855cb-b484-488c-8a84-1d3962dc297f\") " Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.329227 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e8855cb-b484-488c-8a84-1d3962dc297f" (UID: "9e8855cb-b484-488c-8a84-1d3962dc297f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.329628 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config" (OuterVolumeSpecName: "config") pod "9e8855cb-b484-488c-8a84-1d3962dc297f" (UID: "9e8855cb-b484-488c-8a84-1d3962dc297f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.337354 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9" (OuterVolumeSpecName: "kube-api-access-vl5t9") pod "9e8855cb-b484-488c-8a84-1d3962dc297f" (UID: "9e8855cb-b484-488c-8a84-1d3962dc297f"). InnerVolumeSpecName "kube-api-access-vl5t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.379917 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.430838 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.430878 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl5t9\" (UniqueName: \"kubernetes.io/projected/9e8855cb-b484-488c-8a84-1d3962dc297f-kube-api-access-vl5t9\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.430893 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e8855cb-b484-488c-8a84-1d3962dc297f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.482705 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" event={"ID":"e85a4a78-b668-4913-969d-03ee773c74f9","Type":"ContainerStarted","Data":"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.489721 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:51 crc 
kubenswrapper[4857]: I0318 14:24:51.504110 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7z7fh" event={"ID":"583a3a2f-591c-4cb4-96d7-3f1ad08441a8","Type":"ContainerStarted","Data":"ec5b2e640c6e2e83dc1e55b86db1ad9549a31242faf4f6ee778e3294330c4703"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.504473 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7z7fh" event={"ID":"583a3a2f-591c-4cb4-96d7-3f1ad08441a8","Type":"ContainerStarted","Data":"6f01056271b2e307f7aac1ecb051a069da54e819fd1a356d9e805bbd3b59e63a"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.504921 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.504989 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.529142 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0","Type":"ContainerStarted","Data":"47fd108642848d464362cad5ad7d05d0c8dae4bcad9ce6a25a3e5fafe53505b1"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.535559 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" event={"ID":"9e8855cb-b484-488c-8a84-1d3962dc297f","Type":"ContainerDied","Data":"cb5dafc1fec49d0d70f585a194d657e62a88a6872794572622811efb306f7c80"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.535775 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-9h9ps" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.567472 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" podStartSLOduration=26.445879294 podStartE2EDuration="31.567442011s" podCreationTimestamp="2026-03-18 14:24:20 +0000 UTC" firstStartedPulling="2026-03-18 14:24:42.026199953 +0000 UTC m=+1466.155328410" lastFinishedPulling="2026-03-18 14:24:47.14776267 +0000 UTC m=+1471.276891127" observedRunningTime="2026-03-18 14:24:51.534544565 +0000 UTC m=+1475.663673012" watchObservedRunningTime="2026-03-18 14:24:51.567442011 +0000 UTC m=+1475.696570458" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.581539 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7z7fh" podStartSLOduration=23.881925449 podStartE2EDuration="55.581515674s" podCreationTimestamp="2026-03-18 14:23:56 +0000 UTC" firstStartedPulling="2026-03-18 14:24:08.039713362 +0000 UTC m=+1432.168841809" lastFinishedPulling="2026-03-18 14:24:39.739303577 +0000 UTC m=+1463.868432034" observedRunningTime="2026-03-18 14:24:51.573540344 +0000 UTC m=+1475.702668811" watchObservedRunningTime="2026-03-18 14:24:51.581515674 +0000 UTC m=+1475.710644131" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.583813 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerStarted","Data":"6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af"} Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.645487 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.606578084 podStartE2EDuration="55.64546396s" podCreationTimestamp="2026-03-18 14:23:56 +0000 UTC" firstStartedPulling="2026-03-18 14:24:08.038676916 +0000 UTC 
m=+1432.167805373" lastFinishedPulling="2026-03-18 14:24:51.077562792 +0000 UTC m=+1475.206691249" observedRunningTime="2026-03-18 14:24:51.612459561 +0000 UTC m=+1475.741588018" watchObservedRunningTime="2026-03-18 14:24:51.64546396 +0000 UTC m=+1475.774592417" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.648459 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=29.75316136 podStartE2EDuration="1m2.648446805s" podCreationTimestamp="2026-03-18 14:23:49 +0000 UTC" firstStartedPulling="2026-03-18 14:23:53.795698724 +0000 UTC m=+1417.924827181" lastFinishedPulling="2026-03-18 14:24:26.690984169 +0000 UTC m=+1450.820112626" observedRunningTime="2026-03-18 14:24:51.638491035 +0000 UTC m=+1475.767619492" watchObservedRunningTime="2026-03-18 14:24:51.648446805 +0000 UTC m=+1475.777575262" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.686910 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.695923 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-9h9ps"] Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.895708 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:24:51 crc kubenswrapper[4857]: W0318 14:24:51.903436 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63c34307_8027_4a3d_a786_1576b61224a0.slice/crio-1585d7e9bc873783c088bc00d1ac584b3b8f3d1e67cbad841ad325bc8d9500ca WatchSource:0}: Error finding container 1585d7e9bc873783c088bc00d1ac584b3b8f3d1e67cbad841ad325bc8d9500ca: Status 404 returned error can't find the container with id 1585d7e9bc873783c088bc00d1ac584b3b8f3d1e67cbad841ad325bc8d9500ca Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.942510 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:51 crc kubenswrapper[4857]: I0318 14:24:51.998027 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.525026 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.596715 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jvjlg" event={"ID":"635e665d-2bdc-4e46-913d-0362aa4d4e3d","Type":"ContainerStarted","Data":"2fae714efd5adb24dd278329dade7d49e20c2affd1be7706c2e1f823227ef71e"} Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.597024 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jvjlg" Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.600179 4857 generic.go:334] "Generic (PLEG): container finished" podID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerID="79cd86a9b209713cb25346666f89130e527be7e8ed5da09c5cba144a8b85f2c8" exitCode=0 Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.600296 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerDied","Data":"79cd86a9b209713cb25346666f89130e527be7e8ed5da09c5cba144a8b85f2c8"} Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.603568 4857 generic.go:334] "Generic (PLEG): container finished" podID="63c34307-8027-4a3d-a786-1576b61224a0" containerID="dcb9e1b2d8b3ae6847aed4ccc8cd01c2509e8486ccb9667f74214fbfa090cd22" exitCode=0 Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.603632 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" 
event={"ID":"63c34307-8027-4a3d-a786-1576b61224a0","Type":"ContainerDied","Data":"dcb9e1b2d8b3ae6847aed4ccc8cd01c2509e8486ccb9667f74214fbfa090cd22"} Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.603836 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" event={"ID":"63c34307-8027-4a3d-a786-1576b61224a0","Type":"ContainerStarted","Data":"1585d7e9bc873783c088bc00d1ac584b3b8f3d1e67cbad841ad325bc8d9500ca"} Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.635707 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jvjlg" podStartSLOduration=4.5881128570000005 podStartE2EDuration="56.63567148s" podCreationTimestamp="2026-03-18 14:23:56 +0000 UTC" firstStartedPulling="2026-03-18 14:23:59.822596238 +0000 UTC m=+1423.951724695" lastFinishedPulling="2026-03-18 14:24:51.870154861 +0000 UTC m=+1475.999283318" observedRunningTime="2026-03-18 14:24:52.625231468 +0000 UTC m=+1476.754359985" watchObservedRunningTime="2026-03-18 14:24:52.63567148 +0000 UTC m=+1476.764799947" Mar 18 14:24:52 crc kubenswrapper[4857]: I0318 14:24:52.678330 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.174502 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8855cb-b484-488c-8a84-1d3962dc297f" path="/var/lib/kubelet/pods/9e8855cb-b484-488c-8a84-1d3962dc297f/volumes" Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.617343 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerStarted","Data":"dafc1fcd5799591aa908ce0bf0bc189cc3f522c9960cc3e0575755e1b1b634e6"} Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.623363 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" 
event={"ID":"63c34307-8027-4a3d-a786-1576b61224a0","Type":"ContainerStarted","Data":"f418139ebf42b58b0a821505302948b81b23498bb279e8e4e04a026b604e0dea"} Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.623408 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.648422 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.880234324 podStartE2EDuration="1m5.648397966s" podCreationTimestamp="2026-03-18 14:23:48 +0000 UTC" firstStartedPulling="2026-03-18 14:23:51.68772997 +0000 UTC m=+1415.816858427" lastFinishedPulling="2026-03-18 14:24:39.455893612 +0000 UTC m=+1463.585022069" observedRunningTime="2026-03-18 14:24:53.645396381 +0000 UTC m=+1477.774524858" watchObservedRunningTime="2026-03-18 14:24:53.648397966 +0000 UTC m=+1477.777526423" Mar 18 14:24:53 crc kubenswrapper[4857]: I0318 14:24:53.672864 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" podStartSLOduration=3.67284471 podStartE2EDuration="3.67284471s" podCreationTimestamp="2026-03-18 14:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:24:53.670381168 +0000 UTC m=+1477.799509635" watchObservedRunningTime="2026-03-18 14:24:53.67284471 +0000 UTC m=+1477.801973167" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.525935 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.575677 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.652153 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="a61234af-d85a-4afc-ad53-ed997001f645" containerID="2c7382308832b285f0127b2fb40e1de03d1be2ba2f0549232624b720577301f4" exitCode=0 Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.652890 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerDied","Data":"2c7382308832b285f0127b2fb40e1de03d1be2ba2f0549232624b720577301f4"} Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.784994 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.785344 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="dnsmasq-dns" containerID="cri-o://3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501" gracePeriod=10 Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.813835 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.816242 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.820329 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.936942 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zww2h\" (UniqueName: \"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.937042 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.937124 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.937181 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:54 crc kubenswrapper[4857]: I0318 14:24:54.937200 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.049900 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zww2h\" (UniqueName: \"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.050074 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.050219 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.050321 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.050376 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.051364 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.051792 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.051862 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.052282 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.085209 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zww2h\" (UniqueName: \"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h\") pod 
\"dnsmasq-dns-698758b865-7fcl8\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.144667 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.389373 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.467026 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config\") pod \"e85a4a78-b668-4913-969d-03ee773c74f9\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.467127 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc\") pod \"e85a4a78-b668-4913-969d-03ee773c74f9\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.467236 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vnx\" (UniqueName: \"kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx\") pod \"e85a4a78-b668-4913-969d-03ee773c74f9\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.467295 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb\") pod \"e85a4a78-b668-4913-969d-03ee773c74f9\" (UID: \"e85a4a78-b668-4913-969d-03ee773c74f9\") " Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.473990 4857 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx" (OuterVolumeSpecName: "kube-api-access-t4vnx") pod "e85a4a78-b668-4913-969d-03ee773c74f9" (UID: "e85a4a78-b668-4913-969d-03ee773c74f9"). InnerVolumeSpecName "kube-api-access-t4vnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.535360 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e85a4a78-b668-4913-969d-03ee773c74f9" (UID: "e85a4a78-b668-4913-969d-03ee773c74f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.554781 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config" (OuterVolumeSpecName: "config") pod "e85a4a78-b668-4913-969d-03ee773c74f9" (UID: "e85a4a78-b668-4913-969d-03ee773c74f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.567978 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e85a4a78-b668-4913-969d-03ee773c74f9" (UID: "e85a4a78-b668-4913-969d-03ee773c74f9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.570161 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.570194 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.570205 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vnx\" (UniqueName: \"kubernetes.io/projected/e85a4a78-b668-4913-969d-03ee773c74f9-kube-api-access-t4vnx\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.570220 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e85a4a78-b668-4913-969d-03ee773c74f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.664164 4857 generic.go:334] "Generic (PLEG): container finished" podID="e85a4a78-b668-4913-969d-03ee773c74f9" containerID="3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501" exitCode=0 Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.664231 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" event={"ID":"e85a4a78-b668-4913-969d-03ee773c74f9","Type":"ContainerDied","Data":"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501"} Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.664260 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" event={"ID":"e85a4a78-b668-4913-969d-03ee773c74f9","Type":"ContainerDied","Data":"0d20c08e73b2e69e099334088833143bbfa2d1641ab42338d3ff5f687d25c173"} Mar 18 
14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.664293 4857 scope.go:117] "RemoveContainer" containerID="3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.664417 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-9wtzl" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.669551 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8b53cfe-8acc-431c-be7e-b6d48ce587a8","Type":"ContainerStarted","Data":"8829a66b8f82391a5de78501b48d419be4736a59d5607024bbb3678f5ab6ae0b"} Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.670887 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.692057 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.9854516220000002 podStartE2EDuration="1m2.692035316s" podCreationTimestamp="2026-03-18 14:23:53 +0000 UTC" firstStartedPulling="2026-03-18 14:23:54.827289684 +0000 UTC m=+1418.956418141" lastFinishedPulling="2026-03-18 14:24:54.533873378 +0000 UTC m=+1478.663001835" observedRunningTime="2026-03-18 14:24:55.689388079 +0000 UTC m=+1479.818516536" watchObservedRunningTime="2026-03-18 14:24:55.692035316 +0000 UTC m=+1479.821163773" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.694849 4857 scope.go:117] "RemoveContainer" containerID="4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.714585 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.719821 4857 scope.go:117] "RemoveContainer" containerID="3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501" Mar 18 
14:24:55 crc kubenswrapper[4857]: E0318 14:24:55.720409 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501\": container with ID starting with 3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501 not found: ID does not exist" containerID="3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.720561 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501"} err="failed to get container status \"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501\": rpc error: code = NotFound desc = could not find container \"3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501\": container with ID starting with 3aa9a4d379f022002e359784f8100dcae17fc5f440c2ab2d6c2b310abe9b6501 not found: ID does not exist" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.720600 4857 scope.go:117] "RemoveContainer" containerID="4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89" Mar 18 14:24:55 crc kubenswrapper[4857]: E0318 14:24:55.721017 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89\": container with ID starting with 4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89 not found: ID does not exist" containerID="4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.721041 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89"} err="failed to get container status 
\"4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89\": rpc error: code = NotFound desc = could not find container \"4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89\": container with ID starting with 4eba927bab19103f66b4838c4d46f6d5ebbe8e54e40564799117e000f2edef89 not found: ID does not exist" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.725160 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.736880 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-9wtzl"] Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.909811 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 18 14:24:55 crc kubenswrapper[4857]: E0318 14:24:55.911268 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="dnsmasq-dns" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.911299 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="dnsmasq-dns" Mar 18 14:24:55 crc kubenswrapper[4857]: E0318 14:24:55.911385 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="init" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.911398 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="init" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.912094 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" containerName="dnsmasq-dns" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.921091 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.923318 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.923492 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.923544 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.925783 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-k9lxb" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.928295 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980403 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-lock\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980549 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx2w9\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-kube-api-access-fx2w9\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980635 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-cache\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 
18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980675 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980933 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca61c04-f56b-42c4-99fe-daa7f80436f7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:55 crc kubenswrapper[4857]: I0318 14:24:55.980981 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083106 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-lock\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083193 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx2w9\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-kube-api-access-fx2w9\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083233 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-cache\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083252 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083370 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca61c04-f56b-42c4-99fe-daa7f80436f7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083387 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.083565 4857 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.083589 4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.083662 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift 
podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:24:56.583644217 +0000 UTC m=+1480.712772674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.083809 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-lock\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.084311 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1ca61c04-f56b-42c4-99fe-daa7f80436f7-cache\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.088500 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ca61c04-f56b-42c4-99fe-daa7f80436f7-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.091430 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.091517 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9c7097f2f60f6ff57f32555e9df167750d1a286b3e0d500665acc830a2a7d47f/globalmount\"" pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.103436 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx2w9\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-kube-api-access-fx2w9\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.142481 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94c4d4f3-2ffc-4796-a63c-245bf55b7295\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.466429 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qmp52"] Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.468180 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.471146 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.471637 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.472147 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.483486 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qmp52"] Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611288 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c442r\" (UniqueName: \"kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611394 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611526 
4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611632 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611671 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611721 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.611765 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.612080 4857 projected.go:288] Couldn't 
get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.612118 4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: E0318 14:24:56.612177 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:24:57.612146556 +0000 UTC m=+1481.741275013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.691502 4857 generic.go:334] "Generic (PLEG): container finished" podID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerID="da2da7a35bc2b162530e25c975294e402b90f87bc143db75802bb6a98fae381f" exitCode=0 Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.692848 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7fcl8" event={"ID":"031b5441-9d41-406b-aea4-47ea37b74a2a","Type":"ContainerDied","Data":"da2da7a35bc2b162530e25c975294e402b90f87bc143db75802bb6a98fae381f"} Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.692894 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7fcl8" event={"ID":"031b5441-9d41-406b-aea4-47ea37b74a2a","Type":"ContainerStarted","Data":"2092f8ed20eb5e4cc0d680aa1438e7acb5aa8e1ab6d9d0fe27d6bfc02d6f603d"} Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716362 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716429 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716456 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716506 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c442r\" (UniqueName: \"kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716542 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716589 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts\") 
pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.716638 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.717119 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.717671 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.725832 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.726763 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" 
Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.729284 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.734407 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.741476 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c442r\" (UniqueName: \"kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r\") pod \"swift-ring-rebalance-qmp52\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:56 crc kubenswrapper[4857]: I0318 14:24:56.787490 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.188427 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85a4a78-b668-4913-969d-03ee773c74f9" path="/var/lib/kubelet/pods/e85a4a78-b668-4913-969d-03ee773c74f9/volumes" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.356925 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qmp52"] Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.666682 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.672626 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:57 crc kubenswrapper[4857]: E0318 14:24:57.672903 4857 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:24:57 crc kubenswrapper[4857]: E0318 14:24:57.672937 4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:24:57 crc kubenswrapper[4857]: E0318 14:24:57.673006 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:24:59.67298269 +0000 UTC m=+1483.802111157 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.718187 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7fcl8" event={"ID":"031b5441-9d41-406b-aea4-47ea37b74a2a","Type":"ContainerStarted","Data":"439c148d08328ffe3560e87ffa596cbf8f933046850ff91fb25462b0f59de394"} Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.718277 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.719943 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qmp52" event={"ID":"04d9193e-1a5e-4943-9241-05e854fb24cb","Type":"ContainerStarted","Data":"f37b450986e2e9faf49b6c39d3bfce6bcfbe8ced0039062ebc398521c37e82f5"} Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.765111 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podStartSLOduration=3.765078762 podStartE2EDuration="3.765078762s" podCreationTimestamp="2026-03-18 14:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:24:57.761277287 +0000 UTC m=+1481.890405744" watchObservedRunningTime="2026-03-18 14:24:57.765078762 +0000 UTC m=+1481.894207229" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.895962 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.898335 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.912317 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.912666 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8klng" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.912817 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.912873 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.932553 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.980879 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983073 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-scripts\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 
14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983339 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-config\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983458 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sdmq\" (UniqueName: \"kubernetes.io/projected/ceaa02e5-9dc8-4200-a963-075794c1e822-kube-api-access-9sdmq\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983519 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:57 crc kubenswrapper[4857]: I0318 14:24:57.983552 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085550 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sdmq\" (UniqueName: \"kubernetes.io/projected/ceaa02e5-9dc8-4200-a963-075794c1e822-kube-api-access-9sdmq\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085615 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085638 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085693 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085745 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085856 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-scripts\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.085932 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-config\") pod \"ovn-northd-0\" (UID: 
\"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.086969 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-config\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.088356 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.089066 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ceaa02e5-9dc8-4200-a963-075794c1e822-scripts\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.094219 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.094227 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.101382 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ceaa02e5-9dc8-4200-a963-075794c1e822-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.136055 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sdmq\" (UniqueName: \"kubernetes.io/projected/ceaa02e5-9dc8-4200-a963-075794c1e822-kube-api-access-9sdmq\") pod \"ovn-northd-0\" (UID: \"ceaa02e5-9dc8-4200-a963-075794c1e822\") " pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.232874 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 18 14:24:58 crc kubenswrapper[4857]: I0318 14:24:58.837479 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 18 14:24:58 crc kubenswrapper[4857]: W0318 14:24:58.854407 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceaa02e5_9dc8_4200_a963_075794c1e822.slice/crio-28221ae277280e2b0549e5878e0ab9de1d5e2a90aefb52f838b888b3e60f52d7 WatchSource:0}: Error finding container 28221ae277280e2b0549e5878e0ab9de1d5e2a90aefb52f838b888b3e60f52d7: Status 404 returned error can't find the container with id 28221ae277280e2b0549e5878e0ab9de1d5e2a90aefb52f838b888b3e60f52d7 Mar 18 14:24:59 crc kubenswrapper[4857]: I0318 14:24:59.741334 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:24:59 crc kubenswrapper[4857]: E0318 14:24:59.741551 4857 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:24:59 crc kubenswrapper[4857]: E0318 14:24:59.741786 
4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:24:59 crc kubenswrapper[4857]: E0318 14:24:59.741865 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:25:03.741844901 +0000 UTC m=+1487.870973358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:24:59 crc kubenswrapper[4857]: I0318 14:24:59.755704 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ceaa02e5-9dc8-4200-a963-075794c1e822","Type":"ContainerStarted","Data":"28221ae277280e2b0549e5878e0ab9de1d5e2a90aefb52f838b888b3e60f52d7"} Mar 18 14:25:00 crc kubenswrapper[4857]: I0318 14:25:00.452028 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 18 14:25:00 crc kubenswrapper[4857]: I0318 14:25:00.452098 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 18 14:25:00 crc kubenswrapper[4857]: I0318 14:25:00.542019 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 18 14:25:00 crc kubenswrapper[4857]: I0318 14:25:00.871468 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.312946 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 
14:25:01.347606 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.347651 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.448596 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.754042 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5vwhx"] Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.755528 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.800412 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5vwhx"] Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.870927 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2499-account-create-update-j6xhq"] Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.873137 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.881765 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.920243 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2499-account-create-update-j6xhq"] Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.930984 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jdh\" (UniqueName: \"kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.931143 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:01 crc kubenswrapper[4857]: I0318 14:25:01.989395 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.035990 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.036049 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-cb9vr\" (UniqueName: \"kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.036122 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7jdh\" (UniqueName: \"kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.036176 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.037223 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.060278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7jdh\" (UniqueName: \"kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh\") pod \"glance-db-create-5vwhx\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.081432 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.138678 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.138737 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb9vr\" (UniqueName: \"kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.139534 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.165186 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb9vr\" (UniqueName: \"kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr\") pod \"glance-2499-account-create-update-j6xhq\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.205326 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.515128 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9zfmn"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.517606 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.540094 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9zfmn"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.553577 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0a9b-account-create-update-6lftr"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.555625 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.560365 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.566697 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a9b-account-create-update-6lftr"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.659992 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96lf\" (UniqueName: \"kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.660168 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.660332 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpl2\" (UniqueName: \"kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.660377 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.765920 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p96lf\" (UniqueName: \"kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.766109 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.766238 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpl2\" (UniqueName: \"kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.766313 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.767170 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.767813 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.797609 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96lf\" (UniqueName: \"kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf\") pod \"keystone-0a9b-account-create-update-6lftr\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.817821 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpl2\" (UniqueName: \"kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2\") pod \"keystone-db-create-9zfmn\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.848394 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.877466 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.935415 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f0bc-account-create-update-4lkqr"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.937358 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.940288 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.953029 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-s5fvr"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.955174 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.959144 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f0bc-account-create-update-4lkqr"] Mar 18 14:25:02 crc kubenswrapper[4857]: I0318 14:25:02.989677 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s5fvr"] Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.077322 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.077477 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts\") pod \"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.077505 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwwz\" (UniqueName: \"kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.077564 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4rz\" (UniqueName: \"kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz\") pod 
\"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.200729 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.201003 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts\") pod \"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.201053 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwwz\" (UniqueName: \"kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.201151 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4rz\" (UniqueName: \"kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz\") pod \"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.202440 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts\") pod \"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.202683 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.229180 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4rz\" (UniqueName: \"kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz\") pod \"placement-f0bc-account-create-update-4lkqr\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.231541 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwwz\" (UniqueName: \"kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz\") pod \"placement-db-create-s5fvr\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.289492 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.301047 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.812656 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:25:03 crc kubenswrapper[4857]: E0318 14:25:03.813039 4857 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:25:03 crc kubenswrapper[4857]: E0318 14:25:03.813076 4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:25:03 crc kubenswrapper[4857]: E0318 14:25:03.813149 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:25:11.813125817 +0000 UTC m=+1495.942254284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:25:03 crc kubenswrapper[4857]: I0318 14:25:03.815006 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.591526 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8r2f9"] Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.593079 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.619418 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-6779-account-create-update-4dxfv"] Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.621638 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.632376 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.641928 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8r2f9"] Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.684639 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6779-account-create-update-4dxfv"] Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.740489 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts\") pod \"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.740652 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcr5m\" (UniqueName: \"kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.740691 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6cls\" (UniqueName: \"kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls\") pod \"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.740726 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.843290 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts\") pod \"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.843589 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcr5m\" (UniqueName: \"kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.843639 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6cls\" (UniqueName: \"kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls\") pod 
\"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.843682 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.844705 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts\") pod \"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.844803 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.863805 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6cls\" (UniqueName: \"kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls\") pod \"mysqld-exporter-6779-account-create-update-4dxfv\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.870035 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pcr5m\" (UniqueName: \"kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m\") pod \"mysqld-exporter-openstack-db-create-8r2f9\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.923812 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:04 crc kubenswrapper[4857]: I0318 14:25:04.972641 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:05 crc kubenswrapper[4857]: I0318 14:25:05.146031 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:25:05 crc kubenswrapper[4857]: I0318 14:25:05.209870 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:25:05 crc kubenswrapper[4857]: I0318 14:25:05.210136 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="dnsmasq-dns" containerID="cri-o://f418139ebf42b58b0a821505302948b81b23498bb279e8e4e04a026b604e0dea" gracePeriod=10 Mar 18 14:25:05 crc kubenswrapper[4857]: I0318 14:25:05.854882 4857 generic.go:334] "Generic (PLEG): container finished" podID="63c34307-8027-4a3d-a786-1576b61224a0" containerID="f418139ebf42b58b0a821505302948b81b23498bb279e8e4e04a026b604e0dea" exitCode=0 Mar 18 14:25:05 crc kubenswrapper[4857]: I0318 14:25:05.854901 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" event={"ID":"63c34307-8027-4a3d-a786-1576b61224a0","Type":"ContainerDied","Data":"f418139ebf42b58b0a821505302948b81b23498bb279e8e4e04a026b604e0dea"} Mar 18 14:25:06 crc kubenswrapper[4857]: 
I0318 14:25:06.313767 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.751839 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.820227 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb\") pod \"63c34307-8027-4a3d-a786-1576b61224a0\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.820523 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config\") pod \"63c34307-8027-4a3d-a786-1576b61224a0\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.820571 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpxzm\" (UniqueName: \"kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm\") pod \"63c34307-8027-4a3d-a786-1576b61224a0\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.820635 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc\") pod \"63c34307-8027-4a3d-a786-1576b61224a0\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.820676 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb\") pod \"63c34307-8027-4a3d-a786-1576b61224a0\" (UID: \"63c34307-8027-4a3d-a786-1576b61224a0\") " Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.831913 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm" (OuterVolumeSpecName: "kube-api-access-qpxzm") pod "63c34307-8027-4a3d-a786-1576b61224a0" (UID: "63c34307-8027-4a3d-a786-1576b61224a0"). InnerVolumeSpecName "kube-api-access-qpxzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.880795 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" event={"ID":"63c34307-8027-4a3d-a786-1576b61224a0","Type":"ContainerDied","Data":"1585d7e9bc873783c088bc00d1ac584b3b8f3d1e67cbad841ad325bc8d9500ca"} Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.880864 4857 scope.go:117] "RemoveContainer" containerID="f418139ebf42b58b0a821505302948b81b23498bb279e8e4e04a026b604e0dea" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.881084 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jfmsr" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.903085 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config" (OuterVolumeSpecName: "config") pod "63c34307-8027-4a3d-a786-1576b61224a0" (UID: "63c34307-8027-4a3d-a786-1576b61224a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.919323 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63c34307-8027-4a3d-a786-1576b61224a0" (UID: "63c34307-8027-4a3d-a786-1576b61224a0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.921523 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63c34307-8027-4a3d-a786-1576b61224a0" (UID: "63c34307-8027-4a3d-a786-1576b61224a0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.924082 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.924111 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpxzm\" (UniqueName: \"kubernetes.io/projected/63c34307-8027-4a3d-a786-1576b61224a0-kube-api-access-qpxzm\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.924127 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.924138 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Mar 18 14:25:06 crc kubenswrapper[4857]: I0318 14:25:06.935741 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63c34307-8027-4a3d-a786-1576b61224a0" (UID: "63c34307-8027-4a3d-a786-1576b61224a0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.026615 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c34307-8027-4a3d-a786-1576b61224a0-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.255453 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.268087 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jfmsr"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.413831 4857 scope.go:117] "RemoveContainer" containerID="dcb9e1b2d8b3ae6847aed4ccc8cd01c2509e8486ccb9667f74214fbfa090cd22" Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.776985 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f0bc-account-create-update-4lkqr"] Mar 18 14:25:07 crc kubenswrapper[4857]: W0318 14:25:07.814990 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f18e55_a740_4fec_9739_82062db6f9d8.slice/crio-71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970 WatchSource:0}: Error finding container 71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970: Status 404 returned error can't find the container with id 71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970 Mar 18 14:25:07 crc kubenswrapper[4857]: W0318 14:25:07.818393 4857 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda515a015_c680_4c7b_bdd6_ce46602b7e30.slice/crio-75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e WatchSource:0}: Error finding container 75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e: Status 404 returned error can't find the container with id 75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.824655 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2499-account-create-update-j6xhq"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.857370 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9zfmn"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.899656 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5vwhx"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.907937 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8r2f9"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.934961 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9zfmn" event={"ID":"781bd548-5b56-4f74-b1a2-2228b7890b3a","Type":"ContainerStarted","Data":"847c68901c00088c0e5d8cf77f4c6f4108e8b2ddfb313be2d14b4a1f4f2fbaca"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.942777 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-s5fvr"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.943103 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" event={"ID":"943237f4-af1c-4d28-a5e1-5dc93d0d2c71","Type":"ContainerStarted","Data":"a7eebca076855c035add0e50db5546af6c278cdb5ce131db476105cda45e19ba"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 
14:25:07.951067 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5vwhx" event={"ID":"3a5cc680-f973-4abe-a161-a19ac4036406","Type":"ContainerStarted","Data":"bbd36cd8b7b5cc0d348fcc7110539e0fccaf358be4b4756723d0e1b2e6b6bbdf"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.959033 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2499-account-create-update-j6xhq" event={"ID":"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9","Type":"ContainerStarted","Data":"b37c7ea4ba86a4bd0dc0c9e624ab5238c152b4dd24b2d115ed350ab42fde76df"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.963284 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerStarted","Data":"01ed5722c6df22f4aa39b5d2eb9604db9e7ad9e1bbcbe8a5cef1e369f2c7cb15"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.963942 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0a9b-account-create-update-6lftr"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.971976 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a9b-account-create-update-6lftr" event={"ID":"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530","Type":"ContainerStarted","Data":"b72311fef750b232fe3a85c1575ba351bd35e609eab8f15cab54d2b2b98f4839"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.972150 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-6779-account-create-update-4dxfv"] Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.980583 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f0bc-account-create-update-4lkqr" event={"ID":"a515a015-c680-4c7b-bdd6-ce46602b7e30","Type":"ContainerStarted","Data":"75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.982134 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" event={"ID":"78f18e55-a740-4fec-9739-82062db6f9d8","Type":"ContainerStarted","Data":"71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.983367 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5fvr" event={"ID":"9bded09f-2eca-4e52-b648-a21c151b61b6","Type":"ContainerStarted","Data":"4a84a9c7910cb3c5afa9fb5c0975b58b55cb0ac74685505a0f8751ef4f7d93ec"} Mar 18 14:25:07 crc kubenswrapper[4857]: I0318 14:25:07.984658 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qmp52" event={"ID":"04d9193e-1a5e-4943-9241-05e854fb24cb","Type":"ContainerStarted","Data":"439d2c72bcf758f6b2bbe27f8c3f39ae940747e8e5f0f4c0f28494c071b55662"} Mar 18 14:25:08 crc kubenswrapper[4857]: I0318 14:25:08.008028 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ceaa02e5-9dc8-4200-a963-075794c1e822","Type":"ContainerStarted","Data":"e133f08e0c44908bb2a7e751cd80d62e7fec67dc7a5319a1a10f6523dea8ce6b"} Mar 18 14:25:08 crc kubenswrapper[4857]: I0318 14:25:08.019001 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qmp52" podStartSLOduration=3.023131017 podStartE2EDuration="12.018979922s" podCreationTimestamp="2026-03-18 14:24:56 +0000 UTC" firstStartedPulling="2026-03-18 14:24:57.364091465 +0000 UTC m=+1481.493219922" lastFinishedPulling="2026-03-18 14:25:06.35994036 +0000 UTC m=+1490.489068827" observedRunningTime="2026-03-18 14:25:08.009406672 +0000 UTC m=+1492.138535149" watchObservedRunningTime="2026-03-18 14:25:08.018979922 +0000 UTC m=+1492.148108379" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.042571 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" 
event={"ID":"943237f4-af1c-4d28-a5e1-5dc93d0d2c71","Type":"ContainerStarted","Data":"bed9b8d54107b7aad8ba44925a95ecb0c45f5be332d2c48fede93d6440e60bea"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.052721 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5fvr" event={"ID":"9bded09f-2eca-4e52-b648-a21c151b61b6","Type":"ContainerStarted","Data":"46026b1d619aa0af9234a73fd654abcb1c8aacb5d3b8d9552503983a86d7a042"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.066514 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2499-account-create-update-j6xhq" event={"ID":"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9","Type":"ContainerStarted","Data":"98f7202f69d620bf3aaade18d3ac96490d85c235823083bf22ab32bc0897ef45"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.079742 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" podStartSLOduration=5.079716964 podStartE2EDuration="5.079716964s" podCreationTimestamp="2026-03-18 14:25:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.078530994 +0000 UTC m=+1493.207659451" watchObservedRunningTime="2026-03-18 14:25:09.079716964 +0000 UTC m=+1493.208845421" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.086295 4857 generic.go:334] "Generic (PLEG): container finished" podID="3a5cc680-f973-4abe-a161-a19ac4036406" containerID="9bdd662d75d86f11d7df2747d349545519e7dbeb059642b58d002b34c79f3f44" exitCode=0 Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.086473 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5vwhx" event={"ID":"3a5cc680-f973-4abe-a161-a19ac4036406","Type":"ContainerDied","Data":"9bdd662d75d86f11d7df2747d349545519e7dbeb059642b58d002b34c79f3f44"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 
14:25:09.089501 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a9b-account-create-update-6lftr" event={"ID":"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530","Type":"ContainerStarted","Data":"4b72b30ab7e5a386c534851b7a2854588f5cfba416e11b4035413c27b369c3a0"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.091491 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f0bc-account-create-update-4lkqr" event={"ID":"a515a015-c680-4c7b-bdd6-ce46602b7e30","Type":"ContainerStarted","Data":"dda1b8b37c3c421ff3e0c2536377c26c77f08d76649a1d8a325e4e847d0f1763"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.098160 4857 generic.go:334] "Generic (PLEG): container finished" podID="78f18e55-a740-4fec-9739-82062db6f9d8" containerID="808401756aad4b0a647939b364a7cefe29951cdd5fb2ecd75a2b76864d2014d6" exitCode=0 Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.098236 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" event={"ID":"78f18e55-a740-4fec-9739-82062db6f9d8","Type":"ContainerDied","Data":"808401756aad4b0a647939b364a7cefe29951cdd5fb2ecd75a2b76864d2014d6"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.102009 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2499-account-create-update-j6xhq" podStartSLOduration=8.101986853 podStartE2EDuration="8.101986853s" podCreationTimestamp="2026-03-18 14:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.094509155 +0000 UTC m=+1493.223637612" watchObservedRunningTime="2026-03-18 14:25:09.101986853 +0000 UTC m=+1493.231115300" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.110530 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9zfmn" 
event={"ID":"781bd548-5b56-4f74-b1a2-2228b7890b3a","Type":"ContainerStarted","Data":"eae50aaab562ae921e74e95fbfe9cd685e1a1337f5cdec8279e4de6f7759ccdb"} Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.122562 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-s5fvr" podStartSLOduration=7.122539249 podStartE2EDuration="7.122539249s" podCreationTimestamp="2026-03-18 14:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.114683992 +0000 UTC m=+1493.243812469" watchObservedRunningTime="2026-03-18 14:25:09.122539249 +0000 UTC m=+1493.251667706" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.154736 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0a9b-account-create-update-6lftr" podStartSLOduration=7.154708606 podStartE2EDuration="7.154708606s" podCreationTimestamp="2026-03-18 14:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.148251844 +0000 UTC m=+1493.277380291" watchObservedRunningTime="2026-03-18 14:25:09.154708606 +0000 UTC m=+1493.283837063" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.176739 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f0bc-account-create-update-4lkqr" podStartSLOduration=7.176713019 podStartE2EDuration="7.176713019s" podCreationTimestamp="2026-03-18 14:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.168239966 +0000 UTC m=+1493.297368423" watchObservedRunningTime="2026-03-18 14:25:09.176713019 +0000 UTC m=+1493.305841476" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.195828 4857 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="63c34307-8027-4a3d-a786-1576b61224a0" path="/var/lib/kubelet/pods/63c34307-8027-4a3d-a786-1576b61224a0/volumes" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.195930 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-9zfmn" podStartSLOduration=7.195895651 podStartE2EDuration="7.195895651s" podCreationTimestamp="2026-03-18 14:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:09.189326326 +0000 UTC m=+1493.318454813" watchObservedRunningTime="2026-03-18 14:25:09.195895651 +0000 UTC m=+1493.325024108" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.846672 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bg6sl"] Mar 18 14:25:09 crc kubenswrapper[4857]: E0318 14:25:09.847222 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="init" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.847242 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="init" Mar 18 14:25:09 crc kubenswrapper[4857]: E0318 14:25:09.847259 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="dnsmasq-dns" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.847266 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="dnsmasq-dns" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.847538 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c34307-8027-4a3d-a786-1576b61224a0" containerName="dnsmasq-dns" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.848484 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.881721 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.889272 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptrj\" (UniqueName: \"kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj\") pod \"root-account-create-update-bg6sl\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.889465 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts\") pod \"root-account-create-update-bg6sl\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.897151 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bg6sl"] Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.991633 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts\") pod \"root-account-create-update-bg6sl\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.991805 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptrj\" (UniqueName: \"kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj\") pod \"root-account-create-update-bg6sl\" (UID: 
\"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:09 crc kubenswrapper[4857]: I0318 14:25:09.993371 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts\") pod \"root-account-create-update-bg6sl\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.017691 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptrj\" (UniqueName: \"kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj\") pod \"root-account-create-update-bg6sl\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.148133 4857 generic.go:334] "Generic (PLEG): container finished" podID="9bded09f-2eca-4e52-b648-a21c151b61b6" containerID="46026b1d619aa0af9234a73fd654abcb1c8aacb5d3b8d9552503983a86d7a042" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.148240 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5fvr" event={"ID":"9bded09f-2eca-4e52-b648-a21c151b61b6","Type":"ContainerDied","Data":"46026b1d619aa0af9234a73fd654abcb1c8aacb5d3b8d9552503983a86d7a042"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.150736 4857 generic.go:334] "Generic (PLEG): container finished" podID="b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" containerID="98f7202f69d620bf3aaade18d3ac96490d85c235823083bf22ab32bc0897ef45" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.150796 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2499-account-create-update-j6xhq" 
event={"ID":"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9","Type":"ContainerDied","Data":"98f7202f69d620bf3aaade18d3ac96490d85c235823083bf22ab32bc0897ef45"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.153857 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ceaa02e5-9dc8-4200-a963-075794c1e822","Type":"ContainerStarted","Data":"b9a80551aa325335dce53bc1c9b3537836fa2b7cae138e0ecedfa0f38796c279"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.156461 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.158723 4857 generic.go:334] "Generic (PLEG): container finished" podID="2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" containerID="4b72b30ab7e5a386c534851b7a2854588f5cfba416e11b4035413c27b369c3a0" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.158867 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a9b-account-create-update-6lftr" event={"ID":"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530","Type":"ContainerDied","Data":"4b72b30ab7e5a386c534851b7a2854588f5cfba416e11b4035413c27b369c3a0"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.161502 4857 generic.go:334] "Generic (PLEG): container finished" podID="a515a015-c680-4c7b-bdd6-ce46602b7e30" containerID="dda1b8b37c3c421ff3e0c2536377c26c77f08d76649a1d8a325e4e847d0f1763" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.161596 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f0bc-account-create-update-4lkqr" event={"ID":"a515a015-c680-4c7b-bdd6-ce46602b7e30","Type":"ContainerDied","Data":"dda1b8b37c3c421ff3e0c2536377c26c77f08d76649a1d8a325e4e847d0f1763"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.168058 4857 generic.go:334] "Generic (PLEG): container finished" podID="781bd548-5b56-4f74-b1a2-2228b7890b3a" 
containerID="eae50aaab562ae921e74e95fbfe9cd685e1a1337f5cdec8279e4de6f7759ccdb" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.168164 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9zfmn" event={"ID":"781bd548-5b56-4f74-b1a2-2228b7890b3a","Type":"ContainerDied","Data":"eae50aaab562ae921e74e95fbfe9cd685e1a1337f5cdec8279e4de6f7759ccdb"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.172629 4857 generic.go:334] "Generic (PLEG): container finished" podID="943237f4-af1c-4d28-a5e1-5dc93d0d2c71" containerID="bed9b8d54107b7aad8ba44925a95ecb0c45f5be332d2c48fede93d6440e60bea" exitCode=0 Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.172903 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" event={"ID":"943237f4-af1c-4d28-a5e1-5dc93d0d2c71","Type":"ContainerDied","Data":"bed9b8d54107b7aad8ba44925a95ecb0c45f5be332d2c48fede93d6440e60bea"} Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.203299 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:10 crc kubenswrapper[4857]: I0318 14:25:10.779091 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.277478859 podStartE2EDuration="13.779020177s" podCreationTimestamp="2026-03-18 14:24:57 +0000 UTC" firstStartedPulling="2026-03-18 14:24:58.86827582 +0000 UTC m=+1482.997404277" lastFinishedPulling="2026-03-18 14:25:06.369817138 +0000 UTC m=+1490.498945595" observedRunningTime="2026-03-18 14:25:10.766513083 +0000 UTC m=+1494.895641540" watchObservedRunningTime="2026-03-18 14:25:10.779020177 +0000 UTC m=+1494.908148634" Mar 18 14:25:11 crc kubenswrapper[4857]: I0318 14:25:11.882895 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:25:11 crc kubenswrapper[4857]: E0318 14:25:11.912869 4857 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 14:25:11 crc kubenswrapper[4857]: E0318 14:25:11.912903 4857 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 14:25:11 crc kubenswrapper[4857]: E0318 14:25:11.912976 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift podName:1ca61c04-f56b-42c4-99fe-daa7f80436f7 nodeName:}" failed. No retries permitted until 2026-03-18 14:25:27.912958246 +0000 UTC m=+1512.042086703 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift") pod "swift-storage-0" (UID: "1ca61c04-f56b-42c4-99fe-daa7f80436f7") : configmap "swift-ring-files" not found Mar 18 14:25:11 crc kubenswrapper[4857]: I0318 14:25:11.934264 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:11 crc kubenswrapper[4857]: I0318 14:25:11.958783 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.294672 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts\") pod \"3a5cc680-f973-4abe-a161-a19ac4036406\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.294897 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcr5m\" (UniqueName: \"kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m\") pod \"78f18e55-a740-4fec-9739-82062db6f9d8\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.294922 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7jdh\" (UniqueName: \"kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh\") pod \"3a5cc680-f973-4abe-a161-a19ac4036406\" (UID: \"3a5cc680-f973-4abe-a161-a19ac4036406\") " Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.295047 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts\") pod 
\"78f18e55-a740-4fec-9739-82062db6f9d8\" (UID: \"78f18e55-a740-4fec-9739-82062db6f9d8\") " Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.298799 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78f18e55-a740-4fec-9739-82062db6f9d8" (UID: "78f18e55-a740-4fec-9739-82062db6f9d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.303001 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a5cc680-f973-4abe-a161-a19ac4036406" (UID: "3a5cc680-f973-4abe-a161-a19ac4036406"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.326748 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m" (OuterVolumeSpecName: "kube-api-access-pcr5m") pod "78f18e55-a740-4fec-9739-82062db6f9d8" (UID: "78f18e55-a740-4fec-9739-82062db6f9d8"). InnerVolumeSpecName "kube-api-access-pcr5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.332712 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh" (OuterVolumeSpecName: "kube-api-access-n7jdh") pod "3a5cc680-f973-4abe-a161-a19ac4036406" (UID: "3a5cc680-f973-4abe-a161-a19ac4036406"). InnerVolumeSpecName "kube-api-access-n7jdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.352816 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" event={"ID":"78f18e55-a740-4fec-9739-82062db6f9d8","Type":"ContainerDied","Data":"71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970"} Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.352899 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71d8fee0816a21617f1f04e877fded9ad0da6fda2dee5a30fab7aa63e6e9a970" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.353033 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-8r2f9" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.359168 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5vwhx" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.359382 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5vwhx" event={"ID":"3a5cc680-f973-4abe-a161-a19ac4036406","Type":"ContainerDied","Data":"bbd36cd8b7b5cc0d348fcc7110539e0fccaf358be4b4756723d0e1b2e6b6bbdf"} Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.359439 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd36cd8b7b5cc0d348fcc7110539e0fccaf358be4b4756723d0e1b2e6b6bbdf" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.398067 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a5cc680-f973-4abe-a161-a19ac4036406-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.398121 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcr5m\" (UniqueName: 
\"kubernetes.io/projected/78f18e55-a740-4fec-9739-82062db6f9d8-kube-api-access-pcr5m\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.398166 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7jdh\" (UniqueName: \"kubernetes.io/projected/3a5cc680-f973-4abe-a161-a19ac4036406-kube-api-access-n7jdh\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.398186 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f18e55-a740-4fec-9739-82062db6f9d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:12 crc kubenswrapper[4857]: E0318 14:25:12.529494 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78f18e55_a740_4fec_9739_82062db6f9d8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a5cc680_f973_4abe_a161_a19ac4036406.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:25:12 crc kubenswrapper[4857]: I0318 14:25:12.965713 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.039378 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.053860 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:13 crc kubenswrapper[4857]: W0318 14:25:13.057043 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d41cde7_bc91_40e2_bdc8_f419aee0593a.slice/crio-177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf WatchSource:0}: Error finding container 177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf: Status 404 returned error can't find the container with id 177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.071822 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.112805 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.122024 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bg6sl"] Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.124075 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.145861 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts\") pod \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.146038 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts\") pod \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.146156 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p96lf\" (UniqueName: \"kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf\") pod \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\" (UID: \"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.146248 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb9vr\" (UniqueName: \"kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr\") pod \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\" (UID: \"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.147466 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" (UID: "2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.147588 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" (UID: "b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.169879 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf" (OuterVolumeSpecName: "kube-api-access-p96lf") pod "2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" (UID: "2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530"). InnerVolumeSpecName "kube-api-access-p96lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.188206 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr" (OuterVolumeSpecName: "kube-api-access-cb9vr") pod "b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" (UID: "b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9"). InnerVolumeSpecName "kube-api-access-cb9vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249416 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppwwz\" (UniqueName: \"kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz\") pod \"9bded09f-2eca-4e52-b648-a21c151b61b6\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249514 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts\") pod \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249562 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6cls\" (UniqueName: \"kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls\") pod \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\" (UID: \"943237f4-af1c-4d28-a5e1-5dc93d0d2c71\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249629 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts\") pod \"a515a015-c680-4c7b-bdd6-ce46602b7e30\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249720 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts\") pod \"9bded09f-2eca-4e52-b648-a21c151b61b6\" (UID: \"9bded09f-2eca-4e52-b648-a21c151b61b6\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249772 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-5dpl2\" (UniqueName: \"kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2\") pod \"781bd548-5b56-4f74-b1a2-2228b7890b3a\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249799 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk4rz\" (UniqueName: \"kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz\") pod \"a515a015-c680-4c7b-bdd6-ce46602b7e30\" (UID: \"a515a015-c680-4c7b-bdd6-ce46602b7e30\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.249900 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts\") pod \"781bd548-5b56-4f74-b1a2-2228b7890b3a\" (UID: \"781bd548-5b56-4f74-b1a2-2228b7890b3a\") " Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.250419 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb9vr\" (UniqueName: \"kubernetes.io/projected/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-kube-api-access-cb9vr\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.250441 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.250450 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.250459 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p96lf\" (UniqueName: 
\"kubernetes.io/projected/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530-kube-api-access-p96lf\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.250951 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "781bd548-5b56-4f74-b1a2-2228b7890b3a" (UID: "781bd548-5b56-4f74-b1a2-2228b7890b3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.251349 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a515a015-c680-4c7b-bdd6-ce46602b7e30" (UID: "a515a015-c680-4c7b-bdd6-ce46602b7e30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.251774 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bded09f-2eca-4e52-b648-a21c151b61b6" (UID: "9bded09f-2eca-4e52-b648-a21c151b61b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.253311 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "943237f4-af1c-4d28-a5e1-5dc93d0d2c71" (UID: "943237f4-af1c-4d28-a5e1-5dc93d0d2c71"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.254840 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls" (OuterVolumeSpecName: "kube-api-access-l6cls") pod "943237f4-af1c-4d28-a5e1-5dc93d0d2c71" (UID: "943237f4-af1c-4d28-a5e1-5dc93d0d2c71"). InnerVolumeSpecName "kube-api-access-l6cls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.255599 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz" (OuterVolumeSpecName: "kube-api-access-ppwwz") pod "9bded09f-2eca-4e52-b648-a21c151b61b6" (UID: "9bded09f-2eca-4e52-b648-a21c151b61b6"). InnerVolumeSpecName "kube-api-access-ppwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.257707 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2" (OuterVolumeSpecName: "kube-api-access-5dpl2") pod "781bd548-5b56-4f74-b1a2-2228b7890b3a" (UID: "781bd548-5b56-4f74-b1a2-2228b7890b3a"). InnerVolumeSpecName "kube-api-access-5dpl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.265685 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz" (OuterVolumeSpecName: "kube-api-access-nk4rz") pod "a515a015-c680-4c7b-bdd6-ce46602b7e30" (UID: "a515a015-c680-4c7b-bdd6-ce46602b7e30"). InnerVolumeSpecName "kube-api-access-nk4rz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689745 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dpl2\" (UniqueName: \"kubernetes.io/projected/781bd548-5b56-4f74-b1a2-2228b7890b3a-kube-api-access-5dpl2\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689816 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk4rz\" (UniqueName: \"kubernetes.io/projected/a515a015-c680-4c7b-bdd6-ce46602b7e30-kube-api-access-nk4rz\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689829 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/781bd548-5b56-4f74-b1a2-2228b7890b3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689841 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppwwz\" (UniqueName: \"kubernetes.io/projected/9bded09f-2eca-4e52-b648-a21c151b61b6-kube-api-access-ppwwz\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689852 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689863 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6cls\" (UniqueName: \"kubernetes.io/projected/943237f4-af1c-4d28-a5e1-5dc93d0d2c71-kube-api-access-l6cls\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689874 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a515a015-c680-4c7b-bdd6-ce46602b7e30-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.689884 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bded09f-2eca-4e52-b648-a21c151b61b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.706708 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bg6sl" event={"ID":"4d41cde7-bc91-40e2-bdc8-f419aee0593a","Type":"ContainerStarted","Data":"4c3d93b778fe19f2a7d569bb60fd9222a0383034cc78733875215cbc024ade3a"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.707037 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bg6sl" event={"ID":"4d41cde7-bc91-40e2-bdc8-f419aee0593a","Type":"ContainerStarted","Data":"177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.709160 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-s5fvr" event={"ID":"9bded09f-2eca-4e52-b648-a21c151b61b6","Type":"ContainerDied","Data":"4a84a9c7910cb3c5afa9fb5c0975b58b55cb0ac74685505a0f8751ef4f7d93ec"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.709193 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a84a9c7910cb3c5afa9fb5c0975b58b55cb0ac74685505a0f8751ef4f7d93ec" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.709238 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-s5fvr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.715095 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2499-account-create-update-j6xhq" event={"ID":"b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9","Type":"ContainerDied","Data":"b37c7ea4ba86a4bd0dc0c9e624ab5238c152b4dd24b2d115ed350ab42fde76df"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.715152 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37c7ea4ba86a4bd0dc0c9e624ab5238c152b4dd24b2d115ed350ab42fde76df" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.715214 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2499-account-create-update-j6xhq" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.717057 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0a9b-account-create-update-6lftr" event={"ID":"2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530","Type":"ContainerDied","Data":"b72311fef750b232fe3a85c1575ba351bd35e609eab8f15cab54d2b2b98f4839"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.717103 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72311fef750b232fe3a85c1575ba351bd35e609eab8f15cab54d2b2b98f4839" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.717195 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0a9b-account-create-update-6lftr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.724488 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f0bc-account-create-update-4lkqr" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.725854 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f0bc-account-create-update-4lkqr" event={"ID":"a515a015-c680-4c7b-bdd6-ce46602b7e30","Type":"ContainerDied","Data":"75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.725899 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75d7c16906c71879f1d5e88e52455dbeaef3058761f27e4511aafee4c9ee640e" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.734988 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9zfmn" event={"ID":"781bd548-5b56-4f74-b1a2-2228b7890b3a","Type":"ContainerDied","Data":"847c68901c00088c0e5d8cf77f4c6f4108e8b2ddfb313be2d14b4a1f4f2fbaca"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.735032 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847c68901c00088c0e5d8cf77f4c6f4108e8b2ddfb313be2d14b4a1f4f2fbaca" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.735099 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9zfmn" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.741181 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" event={"ID":"943237f4-af1c-4d28-a5e1-5dc93d0d2c71","Type":"ContainerDied","Data":"a7eebca076855c035add0e50db5546af6c278cdb5ce131db476105cda45e19ba"} Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.741221 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7eebca076855c035add0e50db5546af6c278cdb5ce131db476105cda45e19ba" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.741299 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-6779-account-create-update-4dxfv" Mar 18 14:25:13 crc kubenswrapper[4857]: I0318 14:25:13.754073 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-bg6sl" podStartSLOduration=4.75404865 podStartE2EDuration="4.75404865s" podCreationTimestamp="2026-03-18 14:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:13.732066348 +0000 UTC m=+1497.861194805" watchObservedRunningTime="2026-03-18 14:25:13.75404865 +0000 UTC m=+1497.883177097" Mar 18 14:25:14 crc kubenswrapper[4857]: I0318 14:25:14.886988 4857 generic.go:334] "Generic (PLEG): container finished" podID="4d41cde7-bc91-40e2-bdc8-f419aee0593a" containerID="4c3d93b778fe19f2a7d569bb60fd9222a0383034cc78733875215cbc024ade3a" exitCode=0 Mar 18 14:25:14 crc kubenswrapper[4857]: I0318 14:25:14.887298 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bg6sl" event={"ID":"4d41cde7-bc91-40e2-bdc8-f419aee0593a","Type":"ContainerDied","Data":"4c3d93b778fe19f2a7d569bb60fd9222a0383034cc78733875215cbc024ade3a"} Mar 18 14:25:15 crc 
kubenswrapper[4857]: I0318 14:25:15.323982 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc"] Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324457 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324475 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324491 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781bd548-5b56-4f74-b1a2-2228b7890b3a" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324497 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="781bd548-5b56-4f74-b1a2-2228b7890b3a" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324511 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f18e55-a740-4fec-9739-82062db6f9d8" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324517 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f18e55-a740-4fec-9739-82062db6f9d8" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324533 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a515a015-c680-4c7b-bdd6-ce46602b7e30" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324539 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a515a015-c680-4c7b-bdd6-ce46602b7e30" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324554 4857 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324559 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324572 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5cc680-f973-4abe-a161-a19ac4036406" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324578 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5cc680-f973-4abe-a161-a19ac4036406" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324597 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bded09f-2eca-4e52-b648-a21c151b61b6" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324604 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bded09f-2eca-4e52-b648-a21c151b61b6" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: E0318 14:25:15.324612 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943237f4-af1c-4d28-a5e1-5dc93d0d2c71" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324618 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="943237f4-af1c-4d28-a5e1-5dc93d0d2c71" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324838 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="781bd548-5b56-4f74-b1a2-2228b7890b3a" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324852 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc 
kubenswrapper[4857]: I0318 14:25:15.324865 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f18e55-a740-4fec-9739-82062db6f9d8" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324882 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="943237f4-af1c-4d28-a5e1-5dc93d0d2c71" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324893 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bded09f-2eca-4e52-b648-a21c151b61b6" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324902 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324911 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5cc680-f973-4abe-a161-a19ac4036406" containerName="mariadb-database-create" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.324920 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a515a015-c680-4c7b-bdd6-ce46602b7e30" containerName="mariadb-account-create-update" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.325689 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.338762 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc"] Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.430555 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b97d-account-create-update-8zhjd"] Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.432729 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.440290 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b97d-account-create-update-8zhjd"] Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.458136 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.495487 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.495668 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvlz\" (UniqueName: \"kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.597830 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvlz\" (UniqueName: \"kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.597991 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4cm\" 
(UniqueName: \"kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.598073 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.598384 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.599480 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.640809 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvlz\" (UniqueName: \"kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz\") pod \"mysqld-exporter-openstack-cell1-db-create-8d7cc\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.652404 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.700484 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4cm\" (UniqueName: \"kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.700603 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.701798 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.726524 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4cm\" (UniqueName: \"kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm\") pod \"mysqld-exporter-b97d-account-create-update-8zhjd\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " 
pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:15 crc kubenswrapper[4857]: I0318 14:25:15.749878 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:16 crc kubenswrapper[4857]: I0318 14:25:16.904979 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b97d-account-create-update-8zhjd"] Mar 18 14:25:16 crc kubenswrapper[4857]: I0318 14:25:16.913564 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc"] Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.098059 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kj5zp"] Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.105416 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.108722 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fkqhd" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.108983 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.119102 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kj5zp"] Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.386580 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.386837 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.386882 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29csh\" (UniqueName: \"kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.386917 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.450588 4857 generic.go:334] "Generic (PLEG): container finished" podID="865ce56e-0936-4018-9dd8-17343c925b91" containerID="7d1427952d362233c9d1826cf66228a45035946c097dd5362c988677f4388a9b" exitCode=0 Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.452714 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" event={"ID":"508c9be6-0f5e-47ba-b48b-0d28dbf92af3","Type":"ContainerStarted","Data":"7a0a30d7fc75cd34232723076ff00d6a7eb974732206a9210173815a12e45186"} Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.452776 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bg6sl" event={"ID":"4d41cde7-bc91-40e2-bdc8-f419aee0593a","Type":"ContainerDied","Data":"177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf"} 
Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.452796 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177e7e748b5f0d0f27f3d9b986827f28f93f0d7d947e62a994c7a9890a1fcaaf" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.452807 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" event={"ID":"bea36fa5-0ed9-4931-b618-f1731d9bfe49","Type":"ContainerStarted","Data":"9b99c05a196f3f02a644e42922890668ca312b22fe8e1e034b7274dde6bbd0db"} Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.452819 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerDied","Data":"7d1427952d362233c9d1826cf66228a45035946c097dd5362c988677f4388a9b"} Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.455017 4857 generic.go:334] "Generic (PLEG): container finished" podID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerID="513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c" exitCode=0 Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.455048 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerDied","Data":"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c"} Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.488548 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.488624 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29csh\" (UniqueName: 
\"kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.488667 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.488697 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.494811 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.495864 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.503425 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.507470 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.790720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29csh\" (UniqueName: \"kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh\") pod \"glance-db-sync-kj5zp\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.807039 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.915566 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lptrj\" (UniqueName: \"kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj\") pod \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.916642 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts\") pod \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\" (UID: \"4d41cde7-bc91-40e2-bdc8-f419aee0593a\") " Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.918057 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d41cde7-bc91-40e2-bdc8-f419aee0593a" (UID: "4d41cde7-bc91-40e2-bdc8-f419aee0593a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:17 crc kubenswrapper[4857]: I0318 14:25:17.920421 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj" (OuterVolumeSpecName: "kube-api-access-lptrj") pod "4d41cde7-bc91-40e2-bdc8-f419aee0593a" (UID: "4d41cde7-bc91-40e2-bdc8-f419aee0593a"). InnerVolumeSpecName "kube-api-access-lptrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.024495 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d41cde7-bc91-40e2-bdc8-f419aee0593a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.024535 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lptrj\" (UniqueName: \"kubernetes.io/projected/4d41cde7-bc91-40e2-bdc8-f419aee0593a-kube-api-access-lptrj\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.039613 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fkqhd" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.048066 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.425343 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.481337 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" event={"ID":"508c9be6-0f5e-47ba-b48b-0d28dbf92af3","Type":"ContainerStarted","Data":"5941f98344a30735f0ad088a35d4f8cd42468c17f5f86c4846e843649056a712"} Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.495306 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bg6sl" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.497378 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" event={"ID":"bea36fa5-0ed9-4931-b618-f1731d9bfe49","Type":"ContainerStarted","Data":"e93dba244742908f989d91d186808237faaf628a8f4def33c61b31ea1525b128"} Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.510339 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" podStartSLOduration=3.510316163 podStartE2EDuration="3.510316163s" podCreationTimestamp="2026-03-18 14:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:18.501862161 +0000 UTC m=+1502.630990628" watchObservedRunningTime="2026-03-18 14:25:18.510316163 +0000 UTC m=+1502.639444620" Mar 18 14:25:18 crc kubenswrapper[4857]: I0318 14:25:18.535899 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" podStartSLOduration=3.535874335 podStartE2EDuration="3.535874335s" podCreationTimestamp="2026-03-18 14:25:15 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:18.528857028 +0000 UTC m=+1502.657985485" watchObservedRunningTime="2026-03-18 14:25:18.535874335 +0000 UTC m=+1502.665002782" Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.180846 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kj5zp"] Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.505402 4857 generic.go:334] "Generic (PLEG): container finished" podID="bea36fa5-0ed9-4931-b618-f1731d9bfe49" containerID="e93dba244742908f989d91d186808237faaf628a8f4def33c61b31ea1525b128" exitCode=0 Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.505696 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" event={"ID":"bea36fa5-0ed9-4931-b618-f1731d9bfe49","Type":"ContainerDied","Data":"e93dba244742908f989d91d186808237faaf628a8f4def33c61b31ea1525b128"} Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.508709 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerStarted","Data":"a66aa366b37615f34867b83e13af04a4bf6bc0287e8447d4b5651f10313f4b1b"} Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.509633 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.512947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerStarted","Data":"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02"} Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.513599 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 18 14:25:19 crc 
kubenswrapper[4857]: I0318 14:25:19.515362 4857 generic.go:334] "Generic (PLEG): container finished" podID="508c9be6-0f5e-47ba-b48b-0d28dbf92af3" containerID="5941f98344a30735f0ad088a35d4f8cd42468c17f5f86c4846e843649056a712" exitCode=0 Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.515538 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" event={"ID":"508c9be6-0f5e-47ba-b48b-0d28dbf92af3","Type":"ContainerDied","Data":"5941f98344a30735f0ad088a35d4f8cd42468c17f5f86c4846e843649056a712"} Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.517257 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kj5zp" event={"ID":"db23dd3d-8bc7-41ba-9e68-888a9ddb984a","Type":"ContainerStarted","Data":"53c3d20f87b254ebbfbcf8d362b80541311a7afe3c3ac455338f7bd0c3213558"} Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.555935 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.57800036 podStartE2EDuration="1m31.555915645s" podCreationTimestamp="2026-03-18 14:23:48 +0000 UTC" firstStartedPulling="2026-03-18 14:23:50.837135445 +0000 UTC m=+1414.966263902" lastFinishedPulling="2026-03-18 14:24:27.81505071 +0000 UTC m=+1451.944179187" observedRunningTime="2026-03-18 14:25:19.549389361 +0000 UTC m=+1503.678517818" watchObservedRunningTime="2026-03-18 14:25:19.555915645 +0000 UTC m=+1503.685044102" Mar 18 14:25:19 crc kubenswrapper[4857]: I0318 14:25:19.577867 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371944.27693 podStartE2EDuration="1m32.577845415s" podCreationTimestamp="2026-03-18 14:23:47 +0000 UTC" firstStartedPulling="2026-03-18 14:23:50.75050749 +0000 UTC m=+1414.879635947" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:19.573005444 +0000 UTC 
m=+1503.702133901" watchObservedRunningTime="2026-03-18 14:25:19.577845415 +0000 UTC m=+1503.706973872" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.230650 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bg6sl"] Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.235565 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.241787 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.242569 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bg6sl"] Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.424411 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts\") pod \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.424480 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4cm\" (UniqueName: \"kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm\") pod \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\" (UID: \"508c9be6-0f5e-47ba-b48b-0d28dbf92af3\") " Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.424599 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvlz\" (UniqueName: \"kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz\") pod \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 
14:25:21.424675 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts\") pod \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\" (UID: \"bea36fa5-0ed9-4931-b618-f1731d9bfe49\") " Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.425658 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bea36fa5-0ed9-4931-b618-f1731d9bfe49" (UID: "bea36fa5-0ed9-4931-b618-f1731d9bfe49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.426326 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "508c9be6-0f5e-47ba-b48b-0d28dbf92af3" (UID: "508c9be6-0f5e-47ba-b48b-0d28dbf92af3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.434088 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm" (OuterVolumeSpecName: "kube-api-access-2g4cm") pod "508c9be6-0f5e-47ba-b48b-0d28dbf92af3" (UID: "508c9be6-0f5e-47ba-b48b-0d28dbf92af3"). InnerVolumeSpecName "kube-api-access-2g4cm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.445074 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz" (OuterVolumeSpecName: "kube-api-access-qwvlz") pod "bea36fa5-0ed9-4931-b618-f1731d9bfe49" (UID: "bea36fa5-0ed9-4931-b618-f1731d9bfe49"). InnerVolumeSpecName "kube-api-access-qwvlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.527058 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.527091 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g4cm\" (UniqueName: \"kubernetes.io/projected/508c9be6-0f5e-47ba-b48b-0d28dbf92af3-kube-api-access-2g4cm\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.527104 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvlz\" (UniqueName: \"kubernetes.io/projected/bea36fa5-0ed9-4931-b618-f1731d9bfe49-kube-api-access-qwvlz\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.527114 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bea36fa5-0ed9-4931-b618-f1731d9bfe49-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.553102 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" event={"ID":"bea36fa5-0ed9-4931-b618-f1731d9bfe49","Type":"ContainerDied","Data":"9b99c05a196f3f02a644e42922890668ca312b22fe8e1e034b7274dde6bbd0db"} Mar 18 14:25:21 crc kubenswrapper[4857]: 
I0318 14:25:21.553154 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b99c05a196f3f02a644e42922890668ca312b22fe8e1e034b7274dde6bbd0db" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.553231 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.567721 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" event={"ID":"508c9be6-0f5e-47ba-b48b-0d28dbf92af3","Type":"ContainerDied","Data":"7a0a30d7fc75cd34232723076ff00d6a7eb974732206a9210173815a12e45186"} Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.567785 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0a30d7fc75cd34232723076ff00d6a7eb974732206a9210173815a12e45186" Mar 18 14:25:21 crc kubenswrapper[4857]: I0318 14:25:21.567863 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b97d-account-create-update-8zhjd" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.150579 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jvjlg" podUID="635e665d-2bdc-4e46-913d-0362aa4d4e3d" containerName="ovn-controller" probeResult="failure" output=< Mar 18 14:25:22 crc kubenswrapper[4857]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 18 14:25:22 crc kubenswrapper[4857]: > Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.168854 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.171152 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7z7fh" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.443655 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jvjlg-config-rz94t"] Mar 18 14:25:22 crc kubenswrapper[4857]: E0318 14:25:22.444907 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea36fa5-0ed9-4931-b618-f1731d9bfe49" containerName="mariadb-database-create" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.444928 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea36fa5-0ed9-4931-b618-f1731d9bfe49" containerName="mariadb-database-create" Mar 18 14:25:22 crc kubenswrapper[4857]: E0318 14:25:22.444956 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d41cde7-bc91-40e2-bdc8-f419aee0593a" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.444964 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d41cde7-bc91-40e2-bdc8-f419aee0593a" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: E0318 14:25:22.445006 4857 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="508c9be6-0f5e-47ba-b48b-0d28dbf92af3" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.445012 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="508c9be6-0f5e-47ba-b48b-0d28dbf92af3" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.445408 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d41cde7-bc91-40e2-bdc8-f419aee0593a" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.445437 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea36fa5-0ed9-4931-b618-f1731d9bfe49" containerName="mariadb-database-create" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.445478 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="508c9be6-0f5e-47ba-b48b-0d28dbf92af3" containerName="mariadb-account-create-update" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.446622 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.449776 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.458135 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jvjlg-config-rz94t"] Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.782014 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.782607 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.782712 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2wg\" (UniqueName: \"kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.782905 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: 
\"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.782996 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.783202 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.812280 4857 generic.go:334] "Generic (PLEG): container finished" podID="04d9193e-1a5e-4943-9241-05e854fb24cb" containerID="439d2c72bcf758f6b2bbe27f8c3f39ae940747e8e5f0f4c0f28494c071b55662" exitCode=0 Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.812677 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qmp52" event={"ID":"04d9193e-1a5e-4943-9241-05e854fb24cb","Type":"ContainerDied","Data":"439d2c72bcf758f6b2bbe27f8c3f39ae940747e8e5f0f4c0f28494c071b55662"} Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.816983 4857 generic.go:334] "Generic (PLEG): container finished" podID="062e357c-5b17-403b-add2-71ce46b3423a" containerID="271778425daaf4fd5103cf0e854ebbdd9d1759a853d19656e12ae26244a5f2f6" exitCode=0 Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.817261 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerDied","Data":"271778425daaf4fd5103cf0e854ebbdd9d1759a853d19656e12ae26244a5f2f6"} Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.827980 4857 generic.go:334] "Generic (PLEG): container finished" podID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerID="f938c7ba217900403aaae4bef2fa16d3971dcaa20a53f6ecbd6cce1225c680a7" exitCode=0 Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.828476 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerDied","Data":"f938c7ba217900403aaae4bef2fa16d3971dcaa20a53f6ecbd6cce1225c680a7"} Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.885870 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886148 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886214 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886283 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886376 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886394 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc2wg\" (UniqueName: \"kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886892 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.886940 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.887782 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.887834 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.889355 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:22 crc kubenswrapper[4857]: I0318 14:25:22.940496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc2wg\" (UniqueName: \"kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg\") pod \"ovn-controller-jvjlg-config-rz94t\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.105479 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.186492 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d41cde7-bc91-40e2-bdc8-f419aee0593a" path="/var/lib/kubelet/pods/4d41cde7-bc91-40e2-bdc8-f419aee0593a/volumes" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.606235 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jvjlg-config-rz94t"] Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.850907 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerStarted","Data":"2ed308be836bd7991f890aa94f9af0da26f437f5b82d59f01acd49062cb12c2f"} Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.851452 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.858064 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jvjlg-config-rz94t" event={"ID":"2c0625f6-84b9-4585-aa9f-efff3bf8940a","Type":"ContainerStarted","Data":"01d19da2f6bd7317d1b59b6e4b8b7e85419e6b4828bfc2dbb7957fe2b511072a"} Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.862234 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerStarted","Data":"7e45f48edb184b3f99d6359ccd7e9ebc2ef57a7227a04ffb4564d796cb97a864"} Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.863105 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.898439 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371940.956564 
podStartE2EDuration="1m35.898211835s" podCreationTimestamp="2026-03-18 14:23:48 +0000 UTC" firstStartedPulling="2026-03-18 14:23:51.024241933 +0000 UTC m=+1415.153370390" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:23.897019865 +0000 UTC m=+1508.026148322" watchObservedRunningTime="2026-03-18 14:25:23.898211835 +0000 UTC m=+1508.027340292" Mar 18 14:25:23 crc kubenswrapper[4857]: I0318 14:25:23.939966 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371940.914837 podStartE2EDuration="1m35.939939763s" podCreationTimestamp="2026-03-18 14:23:48 +0000 UTC" firstStartedPulling="2026-03-18 14:23:51.512511501 +0000 UTC m=+1415.641639958" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:23.929954792 +0000 UTC m=+1508.059083249" watchObservedRunningTime="2026-03-18 14:25:23.939939763 +0000 UTC m=+1508.069068220" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.350277 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431268 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431366 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c442r\" (UniqueName: \"kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431417 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431531 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431669 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431720 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.431816 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift\") pod \"04d9193e-1a5e-4943-9241-05e854fb24cb\" (UID: \"04d9193e-1a5e-4943-9241-05e854fb24cb\") " Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.433266 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.439680 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.463236 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts" (OuterVolumeSpecName: "scripts") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.470423 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r" (OuterVolumeSpecName: "kube-api-access-c442r") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "kube-api-access-c442r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.480233 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.518913 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.528033 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04d9193e-1a5e-4943-9241-05e854fb24cb" (UID: "04d9193e-1a5e-4943-9241-05e854fb24cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535041 4857 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535239 4857 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/04d9193e-1a5e-4943-9241-05e854fb24cb-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535284 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535300 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c442r\" (UniqueName: \"kubernetes.io/projected/04d9193e-1a5e-4943-9241-05e854fb24cb-kube-api-access-c442r\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535318 4857 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/04d9193e-1a5e-4943-9241-05e854fb24cb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535330 4857 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.535341 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9193e-1a5e-4943-9241-05e854fb24cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.927677 4857 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qmp52" event={"ID":"04d9193e-1a5e-4943-9241-05e854fb24cb","Type":"ContainerDied","Data":"f37b450986e2e9faf49b6c39d3bfce6bcfbe8ced0039062ebc398521c37e82f5"} Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.928203 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f37b450986e2e9faf49b6c39d3bfce6bcfbe8ced0039062ebc398521c37e82f5" Mar 18 14:25:24 crc kubenswrapper[4857]: I0318 14:25:24.928573 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qmp52" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.692660 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:25:25 crc kubenswrapper[4857]: E0318 14:25:25.693642 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d9193e-1a5e-4943-9241-05e854fb24cb" containerName="swift-ring-rebalance" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.693667 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d9193e-1a5e-4943-9241-05e854fb24cb" containerName="swift-ring-rebalance" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.693942 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d9193e-1a5e-4943-9241-05e854fb24cb" containerName="swift-ring-rebalance" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.694975 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.697209 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.714973 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.760126 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.760258 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.760330 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x78f\" (UniqueName: \"kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.862164 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x78f\" (UniqueName: \"kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.862335 
4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.862422 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.883328 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.883617 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:25 crc kubenswrapper[4857]: I0318 14:25:25.888504 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x78f\" (UniqueName: \"kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f\") pod \"mysqld-exporter-0\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " pod="openstack/mysqld-exporter-0" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.017729 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.243489 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6t5nq"] Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.245264 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.249825 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.272885 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6t5nq"] Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.289503 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.289647 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzm4\" (UniqueName: \"kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.396046 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " 
pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.396129 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzm4\" (UniqueName: \"kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.397208 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.422062 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzm4\" (UniqueName: \"kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4\") pod \"root-account-create-update-6t5nq\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.580972 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.810397 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.949845 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fc05a021-e410-4413-8e09-99db47cc4ee5","Type":"ContainerStarted","Data":"2f7c4034095106fbb2f8043c1ccfce87666bbe26113a2b18efca506fc50ea73e"} Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.953215 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c0625f6-84b9-4585-aa9f-efff3bf8940a" containerID="6d753f6109529497f751c11d6a669fafc3bb1d671dc0ae02f2fc754c6d5e54be" exitCode=0 Mar 18 14:25:26 crc kubenswrapper[4857]: I0318 14:25:26.953368 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jvjlg-config-rz94t" event={"ID":"2c0625f6-84b9-4585-aa9f-efff3bf8940a","Type":"ContainerDied","Data":"6d753f6109529497f751c11d6a669fafc3bb1d671dc0ae02f2fc754c6d5e54be"} Mar 18 14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.141159 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jvjlg" Mar 18 14:25:27 crc kubenswrapper[4857]: W0318 14:25:27.279018 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70ca8507_d904_4c86_b90e_7348e4e0d0e9.slice/crio-9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e WatchSource:0}: Error finding container 9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e: Status 404 returned error can't find the container with id 9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e Mar 18 14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.304796 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6t5nq"] Mar 18 
14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.936732 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.953618 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ca61c04-f56b-42c4-99fe-daa7f80436f7-etc-swift\") pod \"swift-storage-0\" (UID: \"1ca61c04-f56b-42c4-99fe-daa7f80436f7\") " pod="openstack/swift-storage-0" Mar 18 14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.974289 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6t5nq" event={"ID":"70ca8507-d904-4c86-b90e-7348e4e0d0e9","Type":"ContainerStarted","Data":"7915919b5e2c4256a4913cf9bc37216c45f1a3ddff357221a2d09b5c1fa1c37c"} Mar 18 14:25:27 crc kubenswrapper[4857]: I0318 14:25:27.974331 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6t5nq" event={"ID":"70ca8507-d904-4c86-b90e-7348e4e0d0e9","Type":"ContainerStarted","Data":"9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e"} Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.016428 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6t5nq" podStartSLOduration=2.016405109 podStartE2EDuration="2.016405109s" podCreationTimestamp="2026-03-18 14:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:28.013434164 +0000 UTC m=+1512.142562621" watchObservedRunningTime="2026-03-18 14:25:28.016405109 +0000 UTC m=+1512.145533566" Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.038361 4857 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.986158 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jvjlg-config-rz94t" event={"ID":"2c0625f6-84b9-4585-aa9f-efff3bf8940a","Type":"ContainerDied","Data":"01d19da2f6bd7317d1b59b6e4b8b7e85419e6b4828bfc2dbb7957fe2b511072a"} Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.986411 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d19da2f6bd7317d1b59b6e4b8b7e85419e6b4828bfc2dbb7957fe2b511072a" Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.988945 4857 generic.go:334] "Generic (PLEG): container finished" podID="70ca8507-d904-4c86-b90e-7348e4e0d0e9" containerID="7915919b5e2c4256a4913cf9bc37216c45f1a3ddff357221a2d09b5c1fa1c37c" exitCode=0 Mar 18 14:25:28 crc kubenswrapper[4857]: I0318 14:25:28.988977 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6t5nq" event={"ID":"70ca8507-d904-4c86-b90e-7348e4e0d0e9","Type":"ContainerDied","Data":"7915919b5e2c4256a4913cf9bc37216c45f1a3ddff357221a2d09b5c1fa1c37c"} Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.206704 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.273731 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc2wg\" (UniqueName: \"kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.273947 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274002 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274052 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274103 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run\") pod \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\" (UID: \"2c0625f6-84b9-4585-aa9f-efff3bf8940a\") " Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274853 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run" (OuterVolumeSpecName: "var-run") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274903 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.274931 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.276027 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts" (OuterVolumeSpecName: "scripts") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.276235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.282297 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg" (OuterVolumeSpecName: "kube-api-access-gc2wg") pod "2c0625f6-84b9-4585-aa9f-efff3bf8940a" (UID: "2c0625f6-84b9-4585-aa9f-efff3bf8940a"). InnerVolumeSpecName "kube-api-access-gc2wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377036 4857 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377084 4857 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377097 4857 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377114 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c0625f6-84b9-4585-aa9f-efff3bf8940a-scripts\") on 
node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377129 4857 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c0625f6-84b9-4585-aa9f-efff3bf8940a-var-run\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.377144 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc2wg\" (UniqueName: \"kubernetes.io/projected/2c0625f6-84b9-4585-aa9f-efff3bf8940a-kube-api-access-gc2wg\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.430037 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.491434 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.629539 4857 scope.go:117] "RemoveContainer" containerID="abb983acc94350e27db98b6ff12909c6f384ade6476def74aa3724215ed54d39" Mar 18 14:25:29 crc kubenswrapper[4857]: I0318 14:25:29.714818 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 18 14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.012874 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"9fcf0dce785839eb2b425c47341db762e8c2d92338da89602aa06affe17ed7c2"} Mar 18 14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.015136 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fc05a021-e410-4413-8e09-99db47cc4ee5","Type":"ContainerStarted","Data":"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3"} Mar 18 
14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.015233 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jvjlg-config-rz94t" Mar 18 14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.058169 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.853235182 podStartE2EDuration="5.058134099s" podCreationTimestamp="2026-03-18 14:25:25 +0000 UTC" firstStartedPulling="2026-03-18 14:25:26.816987525 +0000 UTC m=+1510.946115982" lastFinishedPulling="2026-03-18 14:25:29.021886442 +0000 UTC m=+1513.151014899" observedRunningTime="2026-03-18 14:25:30.036773403 +0000 UTC m=+1514.165901870" watchObservedRunningTime="2026-03-18 14:25:30.058134099 +0000 UTC m=+1514.187262566" Mar 18 14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.358811 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jvjlg-config-rz94t"] Mar 18 14:25:30 crc kubenswrapper[4857]: I0318 14:25:30.373502 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jvjlg-config-rz94t"] Mar 18 14:25:31 crc kubenswrapper[4857]: I0318 14:25:31.174464 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0625f6-84b9-4585-aa9f-efff3bf8940a" path="/var/lib/kubelet/pods/2c0625f6-84b9-4585-aa9f-efff3bf8940a/volumes" Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.117318 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerStarted","Data":"a045dbbd00f6405c8f0f5ed58b21fded3543087ff5b8036556513c8bb5e9662a"} Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.713168 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.863256 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkzm4\" (UniqueName: \"kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4\") pod \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.865452 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts\") pod \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\" (UID: \"70ca8507-d904-4c86-b90e-7348e4e0d0e9\") " Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.867267 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70ca8507-d904-4c86-b90e-7348e4e0d0e9" (UID: "70ca8507-d904-4c86-b90e-7348e4e0d0e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.867940 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4" (OuterVolumeSpecName: "kube-api-access-gkzm4") pod "70ca8507-d904-4c86-b90e-7348e4e0d0e9" (UID: "70ca8507-d904-4c86-b90e-7348e4e0d0e9"). InnerVolumeSpecName "kube-api-access-gkzm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.872855 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70ca8507-d904-4c86-b90e-7348e4e0d0e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:38 crc kubenswrapper[4857]: I0318 14:25:38.872894 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkzm4\" (UniqueName: \"kubernetes.io/projected/70ca8507-d904-4c86-b90e-7348e4e0d0e9-kube-api-access-gkzm4\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.131897 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6t5nq" event={"ID":"70ca8507-d904-4c86-b90e-7348e4e0d0e9","Type":"ContainerDied","Data":"9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e"} Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.132222 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cef56ee209d2b56ece53df6649bd751252484f0fcc704e717c43144fa995c9e" Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.132296 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6t5nq" Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.492011 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.846967 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 18 14:25:39 crc kubenswrapper[4857]: I0318 14:25:39.860018 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 18 14:25:40 crc kubenswrapper[4857]: I0318 14:25:40.172035 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"436f01bd6b0219492df83d03bd6d6cb8c984cc26ff1422baaa31e905db97a204"} Mar 18 14:25:40 crc kubenswrapper[4857]: I0318 14:25:40.173073 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"4dee1320e3a8ea450b6c701e977d02f25567cc84a343c80563f01b0178708010"} Mar 18 14:25:40 crc kubenswrapper[4857]: I0318 14:25:40.173124 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"07ab49a0db52c0381c69b01c2f27035c1f8d81cb8e0772d0d5fbcca11f88b534"} Mar 18 14:25:40 crc kubenswrapper[4857]: I0318 14:25:40.175984 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kj5zp" event={"ID":"db23dd3d-8bc7-41ba-9e68-888a9ddb984a","Type":"ContainerStarted","Data":"6d587580fb0096e6795bce2b9720b3097b84311499130470fc770f7887ce7f7c"} Mar 18 14:25:40 crc kubenswrapper[4857]: I0318 14:25:40.208819 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kj5zp" 
podStartSLOduration=3.718866192 podStartE2EDuration="23.208789367s" podCreationTimestamp="2026-03-18 14:25:17 +0000 UTC" firstStartedPulling="2026-03-18 14:25:19.210956484 +0000 UTC m=+1503.340084931" lastFinishedPulling="2026-03-18 14:25:38.700879629 +0000 UTC m=+1522.830008106" observedRunningTime="2026-03-18 14:25:40.19539664 +0000 UTC m=+1524.324525097" watchObservedRunningTime="2026-03-18 14:25:40.208789367 +0000 UTC m=+1524.337917824" Mar 18 14:25:41 crc kubenswrapper[4857]: I0318 14:25:41.208834 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"8b56c5f88a845b30d9518a4f2a15ad4294de58cd7d3eeb87956565440dd8cd4c"} Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.071649 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-t9srg"] Mar 18 14:25:42 crc kubenswrapper[4857]: E0318 14:25:42.072418 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ca8507-d904-4c86-b90e-7348e4e0d0e9" containerName="mariadb-account-create-update" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.072440 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ca8507-d904-4c86-b90e-7348e4e0d0e9" containerName="mariadb-account-create-update" Mar 18 14:25:42 crc kubenswrapper[4857]: E0318 14:25:42.072488 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0625f6-84b9-4585-aa9f-efff3bf8940a" containerName="ovn-config" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.072495 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0625f6-84b9-4585-aa9f-efff3bf8940a" containerName="ovn-config" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.072739 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ca8507-d904-4c86-b90e-7348e4e0d0e9" containerName="mariadb-account-create-update" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.072783 4857 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0625f6-84b9-4585-aa9f-efff3bf8940a" containerName="ovn-config" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.073625 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.087118 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-af02-account-create-update-bxg27"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.088561 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.102384 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-af02-account-create-update-bxg27"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.108952 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.113036 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-t9srg"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.147192 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dp47\" (UniqueName: \"kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.147307 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt76p\" (UniqueName: \"kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " 
pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.147352 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.147440 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.185646 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.193107 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.249046 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250439 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt76p\" (UniqueName: \"kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzlc\" (UniqueName: \"kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250596 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250647 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250699 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.250817 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dp47\" (UniqueName: \"kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.261037 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sclfz"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.262477 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.262928 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.264511 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.274366 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sclfz"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.292929 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt76p\" (UniqueName: \"kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p\") pod \"heat-af02-account-create-update-bxg27\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.325483 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dp47\" (UniqueName: \"kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47\") pod \"heat-db-create-t9srg\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.359087 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzlc\" (UniqueName: \"kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.359484 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rrt\" (UniqueName: 
\"kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt\") pod \"cinder-db-create-sclfz\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.359530 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.359603 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts\") pod \"cinder-db-create-sclfz\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.359688 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.360152 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.360564 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities\") 
pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.391684 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzlc\" (UniqueName: \"kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc\") pod \"redhat-operators-b98g7\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.399421 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cf79-account-create-update-qkb2x"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.401278 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.409939 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.413222 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cf79-account-create-update-qkb2x"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.462134 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crprl\" (UniqueName: \"kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.462502 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8rrt\" (UniqueName: \"kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt\") pod \"cinder-db-create-sclfz\" (UID: 
\"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.462642 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts\") pod \"cinder-db-create-sclfz\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.462826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.472429 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts\") pod \"cinder-db-create-sclfz\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.502099 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-v2q2q"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.503863 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.511344 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4kgzh" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.511562 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.511721 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.512474 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.513513 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8rrt\" (UniqueName: \"kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt\") pod \"cinder-db-create-sclfz\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.522895 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-t9srg" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.529820 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v2q2q"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.564818 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77w4\" (UniqueName: \"kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.564903 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crprl\" (UniqueName: \"kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.565003 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.565063 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.565104 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.565904 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.576311 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.576618 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2hgxn"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.579611 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.589432 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.608280 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crprl\" (UniqueName: \"kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl\") pod \"cinder-cf79-account-create-update-qkb2x\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.622610 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fbf8-account-create-update-hvlxn"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.624541 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.632666 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.680352 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.680499 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.680611 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7jd\" (UniqueName: 
\"kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.680712 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bkv\" (UniqueName: \"kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv\") pod \"neutron-db-create-2hgxn\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.685416 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.686011 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.686192 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77w4\" (UniqueName: \"kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.686274 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts\") pod \"neutron-db-create-2hgxn\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.688943 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.698196 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2hgxn"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.710052 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77w4\" (UniqueName: \"kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4\") pod \"keystone-db-sync-v2q2q\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") " pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.710565 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fbf8-account-create-update-hvlxn"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.771950 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.781944 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6a01-account-create-update-bx5tc"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.797855 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.816489 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq7jd\" (UniqueName: \"kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.816650 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bkv\" (UniqueName: \"kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv\") pod \"neutron-db-create-2hgxn\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.816720 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.816828 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts\") pod \"neutron-db-create-2hgxn\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.818419 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts\") pod \"neutron-db-create-2hgxn\" (UID: 
\"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.820081 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.823140 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.832980 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-c8wq4"] Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.837892 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.850834 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7jd\" (UniqueName: \"kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd\") pod \"neutron-fbf8-account-create-update-hvlxn\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.878102 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bkv\" (UniqueName: \"kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv\") pod \"neutron-db-create-2hgxn\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.890997 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.898240 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v2q2q" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.928435 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.928515 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znf5v\" (UniqueName: \"kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.929062 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:42 crc kubenswrapper[4857]: I0318 14:25:42.964680 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.061679 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.078563 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.086904 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znf5v\" (UniqueName: \"kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.087248 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.087422 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmlv2\" (UniqueName: 
\"kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.117514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znf5v\" (UniqueName: \"kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v\") pod \"barbican-6a01-account-create-update-bx5tc\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.117605 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6a01-account-create-update-bx5tc"] Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.147545 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.194915 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.195032 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmlv2\" (UniqueName: \"kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.196020 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.253063 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c8wq4"] Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.289804 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmlv2\" (UniqueName: \"kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2\") pod \"barbican-db-create-c8wq4\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.342404 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerStarted","Data":"4502bc50598283ef9116fbb9bfb782d2bfae62b49a1b0440934ae48f1c96f622"} Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.345198 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-t9srg"] Mar 18 14:25:43 crc kubenswrapper[4857]: W0318 14:25:43.411291 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da3dc31_b98c_4d11_8837_96fe5c7d8398.slice/crio-be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97 WatchSource:0}: Error finding container be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97: Status 404 returned error can't find the container with id be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97 Mar 18 14:25:43 crc kubenswrapper[4857]: I0318 14:25:43.563726 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.184381 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=8.279591187 podStartE2EDuration="1m51.184355829s" podCreationTimestamp="2026-03-18 14:23:53 +0000 UTC" firstStartedPulling="2026-03-18 14:23:59.03603356 +0000 UTC m=+1423.165162017" lastFinishedPulling="2026-03-18 14:25:41.940798172 +0000 UTC m=+1526.069926659" observedRunningTime="2026-03-18 14:25:43.379766039 +0000 UTC m=+1527.508894496" watchObservedRunningTime="2026-03-18 14:25:44.184355829 +0000 UTC m=+1528.313484286" Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.187734 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-af02-account-create-update-bxg27"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.223282 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sclfz"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.387584 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sclfz" event={"ID":"d01853ca-d154-4247-b6b5-d0af7407921d","Type":"ContainerStarted","Data":"5cde3c9ca11a1d8a8138d6dc3ea2a757685e8d612a1a475d1f632e32b28e7d6b"} Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.395969 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t9srg" event={"ID":"1da3dc31-b98c-4d11-8837-96fe5c7d8398","Type":"ContainerStarted","Data":"be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97"} Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.399397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-af02-account-create-update-bxg27" event={"ID":"fe1c4712-6135-41e6-9535-569379422bd7","Type":"ContainerStarted","Data":"ef85978c59000152fe6a6a941d62c07b749dd6096a7d1db53ff0bb5bb77195a6"} Mar 18 14:25:44 crc 
kubenswrapper[4857]: W0318 14:25:44.614982 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54bc8846_fa5e_4a90_af94_4b44e6bde172.slice/crio-c421186901b636497a5a327568858607b19a69fad96d585294380c3d408188ab WatchSource:0}: Error finding container c421186901b636497a5a327568858607b19a69fad96d585294380c3d408188ab: Status 404 returned error can't find the container with id c421186901b636497a5a327568858607b19a69fad96d585294380c3d408188ab Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.619528 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cf79-account-create-update-qkb2x"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.638444 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6a01-account-create-update-bx5tc"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.660794 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.677101 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c8wq4"] Mar 18 14:25:44 crc kubenswrapper[4857]: I0318 14:25:44.886529 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-v2q2q"] Mar 18 14:25:44 crc kubenswrapper[4857]: W0318 14:25:44.966110 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec10534a_1292_409a_adff_ecfac639275f.slice/crio-dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1 WatchSource:0}: Error finding container dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1: Status 404 returned error can't find the container with id dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1 Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.023195 4857 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/neutron-db-create-2hgxn"] Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.043722 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fbf8-account-create-update-hvlxn"] Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.424534 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cf79-account-create-update-qkb2x" event={"ID":"bab5d828-1730-4e36-a0a4-57704e03f6d9","Type":"ContainerStarted","Data":"bfefc711fdb8b00c14eb35c329385bcbf2fc37de6e0c746132826b7c0236c108"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.424953 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cf79-account-create-update-qkb2x" event={"ID":"bab5d828-1730-4e36-a0a4-57704e03f6d9","Type":"ContainerStarted","Data":"b0283c3cc66b05044362b5d05fb6d4f4706a4a7aee0679960d3d87f77e25643a"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.432199 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sclfz" event={"ID":"d01853ca-d154-4247-b6b5-d0af7407921d","Type":"ContainerStarted","Data":"0a2b398ef4eab5f964b86591a567b3bc647ed0df55363c7d317741cd0114aecc"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.435372 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8wq4" event={"ID":"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a","Type":"ContainerStarted","Data":"846f8f74499c5bd736571da677b7e6a970da24ff92e340084bf97839964a70e1"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.437350 4857 generic.go:334] "Generic (PLEG): container finished" podID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerID="9efc2c2ecd9674c1d03eb8ce6f52f0554f7edb68a20263e56e420fcbf28a83c7" exitCode=0 Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.437406 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" 
event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerDied","Data":"9efc2c2ecd9674c1d03eb8ce6f52f0554f7edb68a20263e56e420fcbf28a83c7"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.437425 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerStarted","Data":"c421186901b636497a5a327568858607b19a69fad96d585294380c3d408188ab"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.441409 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fbf8-account-create-update-hvlxn" event={"ID":"6b313e17-3867-49ca-81b4-a35f89dd5b12","Type":"ContainerStarted","Data":"9c36142a9c12005934e1c0f84c9ea24195b92e778d8efea23559f2647e34e681"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.448012 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t9srg" event={"ID":"1da3dc31-b98c-4d11-8837-96fe5c7d8398","Type":"ContainerStarted","Data":"2d2524fa4901d7edb670f699c8b4d0504848364779bcc5f971fa80cd7332ba05"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.454802 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a01-account-create-update-bx5tc" event={"ID":"88b3029f-dd60-425d-b002-6f1b9a6af1b2","Type":"ContainerStarted","Data":"5cb7c97a7725417d60b3a1f16a92616e6352fe2341935a67abeb8b65ed3a0c9d"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.454850 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a01-account-create-update-bx5tc" event={"ID":"88b3029f-dd60-425d-b002-6f1b9a6af1b2","Type":"ContainerStarted","Data":"1b3a43e5aa6de10987da7ff94504be253a93a696a523305d3b8092afe6ecf4d9"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.458274 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cf79-account-create-update-qkb2x" podStartSLOduration=3.458243503 
podStartE2EDuration="3.458243503s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:45.440858746 +0000 UTC m=+1529.569987203" watchObservedRunningTime="2026-03-18 14:25:45.458243503 +0000 UTC m=+1529.587371960" Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.465819 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v2q2q" event={"ID":"ec10534a-1292-409a-adff-ecfac639275f","Type":"ContainerStarted","Data":"dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.471506 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-af02-account-create-update-bxg27" event={"ID":"fe1c4712-6135-41e6-9535-569379422bd7","Type":"ContainerStarted","Data":"3d520738e8f28c1a015729d9cb42e4e5b3fc97ca82cdb471e0dc74d8a18e1ed2"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.475669 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2hgxn" event={"ID":"01b0f817-e54c-4f5a-89fb-026c01540ea8","Type":"ContainerStarted","Data":"cf1884ca331b5db89f139e1c4439ae8a8c8df953c187e2e05ba38d28bbc67e4a"} Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.478177 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-sclfz" podStartSLOduration=3.478150842 podStartE2EDuration="3.478150842s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:45.462986192 +0000 UTC m=+1529.592114659" watchObservedRunningTime="2026-03-18 14:25:45.478150842 +0000 UTC m=+1529.607279299" Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.531735 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-db-create-t9srg" podStartSLOduration=3.531700667 podStartE2EDuration="3.531700667s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:45.512021643 +0000 UTC m=+1529.641150100" watchObservedRunningTime="2026-03-18 14:25:45.531700667 +0000 UTC m=+1529.660829124" Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.542476 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-af02-account-create-update-bxg27" podStartSLOduration=3.542448367 podStartE2EDuration="3.542448367s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:45.541698158 +0000 UTC m=+1529.670826615" watchObservedRunningTime="2026-03-18 14:25:45.542448367 +0000 UTC m=+1529.671576824" Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.570294 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-6a01-account-create-update-bx5tc" podStartSLOduration=3.570266625 podStartE2EDuration="3.570266625s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:45.565099075 +0000 UTC m=+1529.694227532" watchObservedRunningTime="2026-03-18 14:25:45.570266625 +0000 UTC m=+1529.699395082" Mar 18 14:25:45 crc kubenswrapper[4857]: I0318 14:25:45.635294 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.492352 4857 generic.go:334] "Generic (PLEG): container finished" podID="1da3dc31-b98c-4d11-8837-96fe5c7d8398" 
containerID="2d2524fa4901d7edb670f699c8b4d0504848364779bcc5f971fa80cd7332ba05" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.492459 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t9srg" event={"ID":"1da3dc31-b98c-4d11-8837-96fe5c7d8398","Type":"ContainerDied","Data":"2d2524fa4901d7edb670f699c8b4d0504848364779bcc5f971fa80cd7332ba05"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.498500 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"7ae187a27e83e49676c153d87e50de3ada18c201e8ae9d4537c827b39ba054d1"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.501156 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe1c4712-6135-41e6-9535-569379422bd7" containerID="3d520738e8f28c1a015729d9cb42e4e5b3fc97ca82cdb471e0dc74d8a18e1ed2" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.501248 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-af02-account-create-update-bxg27" event={"ID":"fe1c4712-6135-41e6-9535-569379422bd7","Type":"ContainerDied","Data":"3d520738e8f28c1a015729d9cb42e4e5b3fc97ca82cdb471e0dc74d8a18e1ed2"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.505011 4857 generic.go:334] "Generic (PLEG): container finished" podID="01b0f817-e54c-4f5a-89fb-026c01540ea8" containerID="350af783b3d56bdab2e1a390c685ac1c3c3e5105287d40c8b40dce9d449ec1f1" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.505086 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2hgxn" event={"ID":"01b0f817-e54c-4f5a-89fb-026c01540ea8","Type":"ContainerDied","Data":"350af783b3d56bdab2e1a390c685ac1c3c3e5105287d40c8b40dce9d449ec1f1"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.512899 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="bab5d828-1730-4e36-a0a4-57704e03f6d9" containerID="bfefc711fdb8b00c14eb35c329385bcbf2fc37de6e0c746132826b7c0236c108" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.513086 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cf79-account-create-update-qkb2x" event={"ID":"bab5d828-1730-4e36-a0a4-57704e03f6d9","Type":"ContainerDied","Data":"bfefc711fdb8b00c14eb35c329385bcbf2fc37de6e0c746132826b7c0236c108"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.518205 4857 generic.go:334] "Generic (PLEG): container finished" podID="a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" containerID="aef248f2872b50dd123e307693231889d888d754aa2606722b29923b214ea5ac" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.518289 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8wq4" event={"ID":"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a","Type":"ContainerDied","Data":"aef248f2872b50dd123e307693231889d888d754aa2606722b29923b214ea5ac"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.528767 4857 generic.go:334] "Generic (PLEG): container finished" podID="88b3029f-dd60-425d-b002-6f1b9a6af1b2" containerID="5cb7c97a7725417d60b3a1f16a92616e6352fe2341935a67abeb8b65ed3a0c9d" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.528892 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a01-account-create-update-bx5tc" event={"ID":"88b3029f-dd60-425d-b002-6f1b9a6af1b2","Type":"ContainerDied","Data":"5cb7c97a7725417d60b3a1f16a92616e6352fe2341935a67abeb8b65ed3a0c9d"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.531172 4857 generic.go:334] "Generic (PLEG): container finished" podID="6b313e17-3867-49ca-81b4-a35f89dd5b12" containerID="6eddf6230131bf022b9b2d44f744bb2ba66ac614e3a73865e06874993d9b25a2" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.531230 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-fbf8-account-create-update-hvlxn" event={"ID":"6b313e17-3867-49ca-81b4-a35f89dd5b12","Type":"ContainerDied","Data":"6eddf6230131bf022b9b2d44f744bb2ba66ac614e3a73865e06874993d9b25a2"} Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.532554 4857 generic.go:334] "Generic (PLEG): container finished" podID="d01853ca-d154-4247-b6b5-d0af7407921d" containerID="0a2b398ef4eab5f964b86591a567b3bc647ed0df55363c7d317741cd0114aecc" exitCode=0 Mar 18 14:25:46 crc kubenswrapper[4857]: I0318 14:25:46.532587 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sclfz" event={"ID":"d01853ca-d154-4247-b6b5-d0af7407921d","Type":"ContainerDied","Data":"0a2b398ef4eab5f964b86591a567b3bc647ed0df55363c7d317741cd0114aecc"} Mar 18 14:25:47 crc kubenswrapper[4857]: I0318 14:25:47.550714 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"a1f9eecc1794c45ca39d7b8449dd213d3d50b43df0eee472aa913d9719ad5b40"} Mar 18 14:25:47 crc kubenswrapper[4857]: I0318 14:25:47.551043 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"15dc9978b9a49e5a6be003eaa312751e92646b42e996b5b7925fbf8f644fa4c8"} Mar 18 14:25:47 crc kubenswrapper[4857]: I0318 14:25:47.551065 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"0eee6962fad676fa389fcabfaa529445f1dec7579c68f73ac12170e0b199a195"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.064458 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-t9srg" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.184228 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dp47\" (UniqueName: \"kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47\") pod \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.184324 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts\") pod \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\" (UID: \"1da3dc31-b98c-4d11-8837-96fe5c7d8398\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.188110 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1da3dc31-b98c-4d11-8837-96fe5c7d8398" (UID: "1da3dc31-b98c-4d11-8837-96fe5c7d8398"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.192952 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47" (OuterVolumeSpecName: "kube-api-access-6dp47") pod "1da3dc31-b98c-4d11-8837-96fe5c7d8398" (UID: "1da3dc31-b98c-4d11-8837-96fe5c7d8398"). InnerVolumeSpecName "kube-api-access-6dp47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.288959 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dp47\" (UniqueName: \"kubernetes.io/projected/1da3dc31-b98c-4d11-8837-96fe5c7d8398-kube-api-access-6dp47\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.289248 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da3dc31-b98c-4d11-8837-96fe5c7d8398-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.368408 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.377556 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.405512 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.418253 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.499340 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crprl\" (UniqueName: \"kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl\") pod \"bab5d828-1730-4e36-a0a4-57704e03f6d9\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.499461 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7jd\" (UniqueName: \"kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd\") pod \"6b313e17-3867-49ca-81b4-a35f89dd5b12\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.499487 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts\") pod \"bab5d828-1730-4e36-a0a4-57704e03f6d9\" (UID: \"bab5d828-1730-4e36-a0a4-57704e03f6d9\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.499644 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts\") pod \"6b313e17-3867-49ca-81b4-a35f89dd5b12\" (UID: \"6b313e17-3867-49ca-81b4-a35f89dd5b12\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.501634 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b313e17-3867-49ca-81b4-a35f89dd5b12" (UID: "6b313e17-3867-49ca-81b4-a35f89dd5b12"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.502083 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bab5d828-1730-4e36-a0a4-57704e03f6d9" (UID: "bab5d828-1730-4e36-a0a4-57704e03f6d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.505141 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl" (OuterVolumeSpecName: "kube-api-access-crprl") pod "bab5d828-1730-4e36-a0a4-57704e03f6d9" (UID: "bab5d828-1730-4e36-a0a4-57704e03f6d9"). InnerVolumeSpecName "kube-api-access-crprl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.507188 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd" (OuterVolumeSpecName: "kube-api-access-hq7jd") pod "6b313e17-3867-49ca-81b4-a35f89dd5b12" (UID: "6b313e17-3867-49ca-81b4-a35f89dd5b12"). InnerVolumeSpecName "kube-api-access-hq7jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.578802 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t9srg" event={"ID":"1da3dc31-b98c-4d11-8837-96fe5c7d8398","Type":"ContainerDied","Data":"be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.578855 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3fb32c16f218439ae853917d9818b236092b58ac0de224a83f2b397f0e0d97" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.578928 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t9srg" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.591814 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-af02-account-create-update-bxg27" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.592716 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-af02-account-create-update-bxg27" event={"ID":"fe1c4712-6135-41e6-9535-569379422bd7","Type":"ContainerDied","Data":"ef85978c59000152fe6a6a941d62c07b749dd6096a7d1db53ff0bb5bb77195a6"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.592784 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef85978c59000152fe6a6a941d62c07b749dd6096a7d1db53ff0bb5bb77195a6" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.601960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt76p\" (UniqueName: \"kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p\") pod \"fe1c4712-6135-41e6-9535-569379422bd7\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.602637 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts\") pod \"01b0f817-e54c-4f5a-89fb-026c01540ea8\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.602888 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts\") pod \"fe1c4712-6135-41e6-9535-569379422bd7\" (UID: \"fe1c4712-6135-41e6-9535-569379422bd7\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.602946 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fbf8-account-create-update-hvlxn" event={"ID":"6b313e17-3867-49ca-81b4-a35f89dd5b12","Type":"ContainerDied","Data":"9c36142a9c12005934e1c0f84c9ea24195b92e778d8efea23559f2647e34e681"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.607711 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c36142a9c12005934e1c0f84c9ea24195b92e778d8efea23559f2647e34e681" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.603196 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fbf8-account-create-update-hvlxn" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.608044 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bkv\" (UniqueName: \"kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv\") pod \"01b0f817-e54c-4f5a-89fb-026c01540ea8\" (UID: \"01b0f817-e54c-4f5a-89fb-026c01540ea8\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.603779 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01b0f817-e54c-4f5a-89fb-026c01540ea8" (UID: "01b0f817-e54c-4f5a-89fb-026c01540ea8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.604334 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe1c4712-6135-41e6-9535-569379422bd7" (UID: "fe1c4712-6135-41e6-9535-569379422bd7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.607213 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p" (OuterVolumeSpecName: "kube-api-access-mt76p") pod "fe1c4712-6135-41e6-9535-569379422bd7" (UID: "fe1c4712-6135-41e6-9535-569379422bd7"). InnerVolumeSpecName "kube-api-access-mt76p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.610311 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b313e17-3867-49ca-81b4-a35f89dd5b12-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.610418 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crprl\" (UniqueName: \"kubernetes.io/projected/bab5d828-1730-4e36-a0a4-57704e03f6d9-kube-api-access-crprl\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.610436 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7jd\" (UniqueName: \"kubernetes.io/projected/6b313e17-3867-49ca-81b4-a35f89dd5b12-kube-api-access-hq7jd\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.610449 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bab5d828-1730-4e36-a0a4-57704e03f6d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.615776 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2hgxn" event={"ID":"01b0f817-e54c-4f5a-89fb-026c01540ea8","Type":"ContainerDied","Data":"cf1884ca331b5db89f139e1c4439ae8a8c8df953c187e2e05ba38d28bbc67e4a"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.615829 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf1884ca331b5db89f139e1c4439ae8a8c8df953c187e2e05ba38d28bbc67e4a" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.615899 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2hgxn" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.617148 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv" (OuterVolumeSpecName: "kube-api-access-p7bkv") pod "01b0f817-e54c-4f5a-89fb-026c01540ea8" (UID: "01b0f817-e54c-4f5a-89fb-026c01540ea8"). InnerVolumeSpecName "kube-api-access-p7bkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.621155 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cf79-account-create-update-qkb2x" event={"ID":"bab5d828-1730-4e36-a0a4-57704e03f6d9","Type":"ContainerDied","Data":"b0283c3cc66b05044362b5d05fb6d4f4706a4a7aee0679960d3d87f77e25643a"} Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.621185 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0283c3cc66b05044362b5d05fb6d4f4706a4a7aee0679960d3d87f77e25643a" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.621194 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cf79-account-create-update-qkb2x" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.622003 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.655229 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.678940 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711068 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts\") pod \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711148 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znf5v\" (UniqueName: \"kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v\") pod \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\" (UID: \"88b3029f-dd60-425d-b002-6f1b9a6af1b2\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711223 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts\") pod \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711269 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts\") pod \"d01853ca-d154-4247-b6b5-d0af7407921d\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711309 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmlv2\" (UniqueName: \"kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2\") pod \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\" (UID: \"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711341 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rrt\" (UniqueName: \"kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt\") pod \"d01853ca-d154-4247-b6b5-d0af7407921d\" (UID: \"d01853ca-d154-4247-b6b5-d0af7407921d\") " Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711731 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt76p\" (UniqueName: \"kubernetes.io/projected/fe1c4712-6135-41e6-9535-569379422bd7-kube-api-access-mt76p\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711778 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0f817-e54c-4f5a-89fb-026c01540ea8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711791 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe1c4712-6135-41e6-9535-569379422bd7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.711802 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bkv\" (UniqueName: \"kubernetes.io/projected/01b0f817-e54c-4f5a-89fb-026c01540ea8-kube-api-access-p7bkv\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.712845 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d01853ca-d154-4247-b6b5-d0af7407921d" (UID: "d01853ca-d154-4247-b6b5-d0af7407921d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.713010 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88b3029f-dd60-425d-b002-6f1b9a6af1b2" (UID: "88b3029f-dd60-425d-b002-6f1b9a6af1b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.713452 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" (UID: "a3ed0c05-eea8-4b99-80bc-f4cee9075f8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.719500 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt" (OuterVolumeSpecName: "kube-api-access-v8rrt") pod "d01853ca-d154-4247-b6b5-d0af7407921d" (UID: "d01853ca-d154-4247-b6b5-d0af7407921d"). InnerVolumeSpecName "kube-api-access-v8rrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.719567 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2" (OuterVolumeSpecName: "kube-api-access-jmlv2") pod "a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" (UID: "a3ed0c05-eea8-4b99-80bc-f4cee9075f8a"). InnerVolumeSpecName "kube-api-access-jmlv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.720785 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v" (OuterVolumeSpecName: "kube-api-access-znf5v") pod "88b3029f-dd60-425d-b002-6f1b9a6af1b2" (UID: "88b3029f-dd60-425d-b002-6f1b9a6af1b2"). InnerVolumeSpecName "kube-api-access-znf5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812883 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znf5v\" (UniqueName: \"kubernetes.io/projected/88b3029f-dd60-425d-b002-6f1b9a6af1b2-kube-api-access-znf5v\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812920 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812929 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d01853ca-d154-4247-b6b5-d0af7407921d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812940 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmlv2\" (UniqueName: \"kubernetes.io/projected/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a-kube-api-access-jmlv2\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812950 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rrt\" (UniqueName: \"kubernetes.io/projected/d01853ca-d154-4247-b6b5-d0af7407921d-kube-api-access-v8rrt\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:48 crc kubenswrapper[4857]: I0318 14:25:48.812958 4857 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88b3029f-dd60-425d-b002-6f1b9a6af1b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.632955 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sclfz" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.632962 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sclfz" event={"ID":"d01853ca-d154-4247-b6b5-d0af7407921d","Type":"ContainerDied","Data":"5cde3c9ca11a1d8a8138d6dc3ea2a757685e8d612a1a475d1f632e32b28e7d6b"} Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.633734 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cde3c9ca11a1d8a8138d6dc3ea2a757685e8d612a1a475d1f632e32b28e7d6b" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.641082 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c8wq4" event={"ID":"a3ed0c05-eea8-4b99-80bc-f4cee9075f8a","Type":"ContainerDied","Data":"846f8f74499c5bd736571da677b7e6a970da24ff92e340084bf97839964a70e1"} Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.641125 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846f8f74499c5bd736571da677b7e6a970da24ff92e340084bf97839964a70e1" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.641196 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c8wq4" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.668328 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerStarted","Data":"e01c807e426a367deb3a1e94d26f9f1c3255210cd7514ba39266eefe3cb854b1"} Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.671308 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a01-account-create-update-bx5tc" event={"ID":"88b3029f-dd60-425d-b002-6f1b9a6af1b2","Type":"ContainerDied","Data":"1b3a43e5aa6de10987da7ff94504be253a93a696a523305d3b8092afe6ecf4d9"} Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.671344 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b3a43e5aa6de10987da7ff94504be253a93a696a523305d3b8092afe6ecf4d9" Mar 18 14:25:49 crc kubenswrapper[4857]: I0318 14:25:49.671411 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6a01-account-create-update-bx5tc" Mar 18 14:25:50 crc kubenswrapper[4857]: I0318 14:25:50.692584 4857 generic.go:334] "Generic (PLEG): container finished" podID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerID="e01c807e426a367deb3a1e94d26f9f1c3255210cd7514ba39266eefe3cb854b1" exitCode=0 Mar 18 14:25:50 crc kubenswrapper[4857]: I0318 14:25:50.692636 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerDied","Data":"e01c807e426a367deb3a1e94d26f9f1c3255210cd7514ba39266eefe3cb854b1"} Mar 18 14:25:52 crc kubenswrapper[4857]: I0318 14:25:52.726998 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kj5zp" event={"ID":"db23dd3d-8bc7-41ba-9e68-888a9ddb984a","Type":"ContainerDied","Data":"6d587580fb0096e6795bce2b9720b3097b84311499130470fc770f7887ce7f7c"} Mar 18 14:25:52 crc kubenswrapper[4857]: I0318 14:25:52.726771 4857 generic.go:334] "Generic (PLEG): container finished" podID="db23dd3d-8bc7-41ba-9e68-888a9ddb984a" containerID="6d587580fb0096e6795bce2b9720b3097b84311499130470fc770f7887ce7f7c" exitCode=0 Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.766811 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"fa061f576ba5cc2b76b6006fa4602cba867f150149cf4edcd066121647c8b2bd"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.767271 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"fa50c645e86156b39c7de4a3349f737b36f9070a3fa5f742e07bdb04693d075b"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.767308 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"c88f40b093b28fee4cc3c2641029d3684f341e02b367e24d56d1d0f05cdf7f77"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.767318 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"49a847e1cf5a1dd4c0c59b773021550650002c80bd7f09abb1a61254a5ccf275"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.769761 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerStarted","Data":"6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.773879 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v2q2q" event={"ID":"ec10534a-1292-409a-adff-ecfac639275f","Type":"ContainerStarted","Data":"48a200e6e484cdb5f74dac7ea160ebb3a82f5f2a2addf8dee193fc6c2f3d7ebd"} Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.825221 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b98g7" podStartSLOduration=4.762270861 podStartE2EDuration="11.825194347s" podCreationTimestamp="2026-03-18 14:25:42 +0000 UTC" firstStartedPulling="2026-03-18 14:25:45.793879359 +0000 UTC m=+1529.923007816" lastFinishedPulling="2026-03-18 14:25:52.856802845 +0000 UTC m=+1536.985931302" observedRunningTime="2026-03-18 14:25:53.822347446 +0000 UTC m=+1537.951475933" watchObservedRunningTime="2026-03-18 14:25:53.825194347 +0000 UTC m=+1537.954322814" Mar 18 14:25:53 crc kubenswrapper[4857]: I0318 14:25:53.874467 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-v2q2q" podStartSLOduration=4.390142629 podStartE2EDuration="11.874441854s" podCreationTimestamp="2026-03-18 
14:25:42 +0000 UTC" firstStartedPulling="2026-03-18 14:25:45.030494863 +0000 UTC m=+1529.159623320" lastFinishedPulling="2026-03-18 14:25:52.514794088 +0000 UTC m=+1536.643922545" observedRunningTime="2026-03-18 14:25:53.850912803 +0000 UTC m=+1537.980041260" watchObservedRunningTime="2026-03-18 14:25:53.874441854 +0000 UTC m=+1538.003570311" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.395988 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.563811 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29csh\" (UniqueName: \"kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh\") pod \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.564264 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data\") pod \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.564461 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data\") pod \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.564657 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle\") pod \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\" (UID: \"db23dd3d-8bc7-41ba-9e68-888a9ddb984a\") " Mar 18 14:25:54 crc 
kubenswrapper[4857]: I0318 14:25:54.574444 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "db23dd3d-8bc7-41ba-9e68-888a9ddb984a" (UID: "db23dd3d-8bc7-41ba-9e68-888a9ddb984a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.574691 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh" (OuterVolumeSpecName: "kube-api-access-29csh") pod "db23dd3d-8bc7-41ba-9e68-888a9ddb984a" (UID: "db23dd3d-8bc7-41ba-9e68-888a9ddb984a"). InnerVolumeSpecName "kube-api-access-29csh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.617509 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db23dd3d-8bc7-41ba-9e68-888a9ddb984a" (UID: "db23dd3d-8bc7-41ba-9e68-888a9ddb984a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.635132 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data" (OuterVolumeSpecName: "config-data") pod "db23dd3d-8bc7-41ba-9e68-888a9ddb984a" (UID: "db23dd3d-8bc7-41ba-9e68-888a9ddb984a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.668123 4857 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.668158 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.668167 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.668176 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29csh\" (UniqueName: \"kubernetes.io/projected/db23dd3d-8bc7-41ba-9e68-888a9ddb984a-kube-api-access-29csh\") on node \"crc\" DevicePath \"\"" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.797839 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kj5zp" event={"ID":"db23dd3d-8bc7-41ba-9e68-888a9ddb984a","Type":"ContainerDied","Data":"53c3d20f87b254ebbfbcf8d362b80541311a7afe3c3ac455338f7bd0c3213558"} Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.798188 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c3d20f87b254ebbfbcf8d362b80541311a7afe3c3ac455338f7bd0c3213558" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.798317 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kj5zp" Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.828796 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"cbd6aaf266855d322d944ad03938a9c956692a48c20b81e6883531069c5fccd1"} Mar 18 14:25:54 crc kubenswrapper[4857]: I0318 14:25:54.828842 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"ee8ec475c66de431a4e57666a124e637ea285f51dfa5db08e18625d38d1efe95"} Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.297628 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"] Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298376 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da3dc31-b98c-4d11-8837-96fe5c7d8398" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298395 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da3dc31-b98c-4d11-8837-96fe5c7d8398" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298409 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01853ca-d154-4247-b6b5-d0af7407921d" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298415 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01853ca-d154-4247-b6b5-d0af7407921d" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298438 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db23dd3d-8bc7-41ba-9e68-888a9ddb984a" containerName="glance-db-sync" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298444 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db23dd3d-8bc7-41ba-9e68-888a9ddb984a" containerName="glance-db-sync" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298455 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b0f817-e54c-4f5a-89fb-026c01540ea8" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298461 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b0f817-e54c-4f5a-89fb-026c01540ea8" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298471 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298477 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298487 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab5d828-1730-4e36-a0a4-57704e03f6d9" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298494 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab5d828-1730-4e36-a0a4-57704e03f6d9" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298506 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b313e17-3867-49ca-81b4-a35f89dd5b12" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298512 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b313e17-3867-49ca-81b4-a35f89dd5b12" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298526 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1c4712-6135-41e6-9535-569379422bd7" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc 
kubenswrapper[4857]: I0318 14:25:55.298532 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1c4712-6135-41e6-9535-569379422bd7" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: E0318 14:25:55.298548 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b3029f-dd60-425d-b002-6f1b9a6af1b2" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298574 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b3029f-dd60-425d-b002-6f1b9a6af1b2" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298795 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da3dc31-b98c-4d11-8837-96fe5c7d8398" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298811 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1c4712-6135-41e6-9535-569379422bd7" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298824 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01853ca-d154-4247-b6b5-d0af7407921d" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298842 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab5d828-1730-4e36-a0a4-57704e03f6d9" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298849 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b0f817-e54c-4f5a-89fb-026c01540ea8" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298858 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b3029f-dd60-425d-b002-6f1b9a6af1b2" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298869 4857 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" containerName="mariadb-database-create" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298879 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="db23dd3d-8bc7-41ba-9e68-888a9ddb984a" containerName="glance-db-sync" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.298890 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b313e17-3867-49ca-81b4-a35f89dd5b12" containerName="mariadb-account-create-update" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.311148 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.324903 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"] Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.494082 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhw5\" (UniqueName: \"kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.494218 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.494264 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc\") pod 
\"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.494340 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.494522 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.597044 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.597206 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.597289 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhw5\" (UniqueName: \"kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" 
(UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.597360 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.597396 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.598057 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.598059 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.598278 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc 
kubenswrapper[4857]: I0318 14:25:55.598279 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.615020 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhw5\" (UniqueName: \"kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5\") pod \"dnsmasq-dns-5b946c75cc-c5rsk\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") " pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.630628 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.635052 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.640193 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.856177 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1ca61c04-f56b-42c4-99fe-daa7f80436f7","Type":"ContainerStarted","Data":"0efbbbcac9a4e943d475109ae888f563f06f2ae3d183a202c596dbbd789590d1"} Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.862785 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 18 14:25:55 crc kubenswrapper[4857]: I0318 14:25:55.917138 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.149590064 
podStartE2EDuration="1m1.917113828s" podCreationTimestamp="2026-03-18 14:24:54 +0000 UTC" firstStartedPulling="2026-03-18 14:25:29.745281874 +0000 UTC m=+1513.874410341" lastFinishedPulling="2026-03-18 14:25:52.512805648 +0000 UTC m=+1536.641934105" observedRunningTime="2026-03-18 14:25:55.901433765 +0000 UTC m=+1540.030562222" watchObservedRunningTime="2026-03-18 14:25:55.917113828 +0000 UTC m=+1540.046242285" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.225182 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"] Mar 18 14:25:56 crc kubenswrapper[4857]: W0318 14:25:56.229181 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fbbe403_332c_403c_a528_fe42fc3fe32b.slice/crio-f5d518304a718057710366f261cd2eae823d4f84242cf0db5850ac699b6a6a96 WatchSource:0}: Error finding container f5d518304a718057710366f261cd2eae823d4f84242cf0db5850ac699b6a6a96: Status 404 returned error can't find the container with id f5d518304a718057710366f261cd2eae823d4f84242cf0db5850ac699b6a6a96 Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.353128 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"] Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.386475 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"] Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.390147 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.393005 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.439063 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"] Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551284 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551397 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551501 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551641 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9p2\" (UniqueName: \"kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551677 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.551846 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.653994 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.654035 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.654094 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9p2\" (UniqueName: \"kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.654117 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.654226 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.654337 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.655476 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.655557 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 
14:25:56.655557 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.655716 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.655725 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.674320 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9p2\" (UniqueName: \"kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2\") pod \"dnsmasq-dns-74f6bcbc87-pbgw2\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.730500 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2"
Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.877719 4857 generic.go:334] "Generic (PLEG): container finished" podID="8fbbe403-332c-403c-a528-fe42fc3fe32b" containerID="4f1321397524fd7e99bb7e766801d4885357774fc098632273c442b1d6bfdafa" exitCode=0
Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.878990 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" event={"ID":"8fbbe403-332c-403c-a528-fe42fc3fe32b","Type":"ContainerDied","Data":"4f1321397524fd7e99bb7e766801d4885357774fc098632273c442b1d6bfdafa"}
Mar 18 14:25:56 crc kubenswrapper[4857]: I0318 14:25:56.879027 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" event={"ID":"8fbbe403-332c-403c-a528-fe42fc3fe32b","Type":"ContainerStarted","Data":"f5d518304a718057710366f261cd2eae823d4f84242cf0db5850ac699b6a6a96"}
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.324050 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"]
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.449990 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk"
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.484525 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc\") pod \"8fbbe403-332c-403c-a528-fe42fc3fe32b\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") "
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.484645 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb\") pod \"8fbbe403-332c-403c-a528-fe42fc3fe32b\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") "
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.484691 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config\") pod \"8fbbe403-332c-403c-a528-fe42fc3fe32b\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") "
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.485012 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb\") pod \"8fbbe403-332c-403c-a528-fe42fc3fe32b\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") "
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.485094 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvhw5\" (UniqueName: \"kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5\") pod \"8fbbe403-332c-403c-a528-fe42fc3fe32b\" (UID: \"8fbbe403-332c-403c-a528-fe42fc3fe32b\") "
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.508963 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5" (OuterVolumeSpecName: "kube-api-access-lvhw5") pod "8fbbe403-332c-403c-a528-fe42fc3fe32b" (UID: "8fbbe403-332c-403c-a528-fe42fc3fe32b"). InnerVolumeSpecName "kube-api-access-lvhw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.532411 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fbbe403-332c-403c-a528-fe42fc3fe32b" (UID: "8fbbe403-332c-403c-a528-fe42fc3fe32b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.536368 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fbbe403-332c-403c-a528-fe42fc3fe32b" (UID: "8fbbe403-332c-403c-a528-fe42fc3fe32b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.540292 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fbbe403-332c-403c-a528-fe42fc3fe32b" (UID: "8fbbe403-332c-403c-a528-fe42fc3fe32b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.590294 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.590334 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvhw5\" (UniqueName: \"kubernetes.io/projected/8fbbe403-332c-403c-a528-fe42fc3fe32b-kube-api-access-lvhw5\") on node \"crc\" DevicePath \"\""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.590345 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.590355 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.600064 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config" (OuterVolumeSpecName: "config") pod "8fbbe403-332c-403c-a528-fe42fc3fe32b" (UID: "8fbbe403-332c-403c-a528-fe42fc3fe32b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.692157 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbe403-332c-403c-a528-fe42fc3fe32b-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.892883 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk" event={"ID":"8fbbe403-332c-403c-a528-fe42fc3fe32b","Type":"ContainerDied","Data":"f5d518304a718057710366f261cd2eae823d4f84242cf0db5850ac699b6a6a96"}
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.892943 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-c5rsk"
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.892960 4857 scope.go:117] "RemoveContainer" containerID="4f1321397524fd7e99bb7e766801d4885357774fc098632273c442b1d6bfdafa"
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.895731 4857 generic.go:334] "Generic (PLEG): container finished" podID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerID="9fcbaa69ebdd260ce6ee91ef9c08370d43307cd65a1add7b8d0bf0f328d7d16b" exitCode=0
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.895809 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" event={"ID":"c6f38bca-cc01-4f27-b0b8-9ba8f1743506","Type":"ContainerDied","Data":"9fcbaa69ebdd260ce6ee91ef9c08370d43307cd65a1add7b8d0bf0f328d7d16b"}
Mar 18 14:25:57 crc kubenswrapper[4857]: I0318 14:25:57.895847 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" event={"ID":"c6f38bca-cc01-4f27-b0b8-9ba8f1743506","Type":"ContainerStarted","Data":"b0bf2aed185c9270e7efc2b317995b7116ea520e724a8a1d56dd170b3814957f"}
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.059499 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"]
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.095263 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-c5rsk"]
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.908899 4857 generic.go:334] "Generic (PLEG): container finished" podID="ec10534a-1292-409a-adff-ecfac639275f" containerID="48a200e6e484cdb5f74dac7ea160ebb3a82f5f2a2addf8dee193fc6c2f3d7ebd" exitCode=0
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.909121 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v2q2q" event={"ID":"ec10534a-1292-409a-adff-ecfac639275f","Type":"ContainerDied","Data":"48a200e6e484cdb5f74dac7ea160ebb3a82f5f2a2addf8dee193fc6c2f3d7ebd"}
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.912516 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" event={"ID":"c6f38bca-cc01-4f27-b0b8-9ba8f1743506","Type":"ContainerStarted","Data":"eda86b486364a69205f4f18182f567e0ff9fff61735ecb699fa126278957b608"}
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.913733 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2"
Mar 18 14:25:58 crc kubenswrapper[4857]: I0318 14:25:58.960118 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" podStartSLOduration=2.9600941670000003 podStartE2EDuration="2.960094167s" podCreationTimestamp="2026-03-18 14:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:25:58.95264131 +0000 UTC m=+1543.081769767" watchObservedRunningTime="2026-03-18 14:25:58.960094167 +0000 UTC m=+1543.089222624"
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.182303 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbbe403-332c-403c-a528-fe42fc3fe32b" path="/var/lib/kubelet/pods/8fbbe403-332c-403c-a528-fe42fc3fe32b/volumes"
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.543793 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.544414 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="prometheus" containerID="cri-o://01ed5722c6df22f4aa39b5d2eb9604db9e7ad9e1bbcbe8a5cef1e369f2c7cb15" gracePeriod=600
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.544517 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="thanos-sidecar" containerID="cri-o://4502bc50598283ef9116fbb9bfb782d2bfae62b49a1b0440934ae48f1c96f622" gracePeriod=600
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.544617 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="config-reloader" containerID="cri-o://a045dbbd00f6405c8f0f5ed58b21fded3543087ff5b8036556513c8bb5e9662a" gracePeriod=600
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928589 4857 generic.go:334] "Generic (PLEG): container finished" podID="a61234af-d85a-4afc-ad53-ed997001f645" containerID="4502bc50598283ef9116fbb9bfb782d2bfae62b49a1b0440934ae48f1c96f622" exitCode=0
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928632 4857 generic.go:334] "Generic (PLEG): container finished" podID="a61234af-d85a-4afc-ad53-ed997001f645" containerID="a045dbbd00f6405c8f0f5ed58b21fded3543087ff5b8036556513c8bb5e9662a" exitCode=0
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928642 4857 generic.go:334] "Generic (PLEG): container finished" podID="a61234af-d85a-4afc-ad53-ed997001f645" containerID="01ed5722c6df22f4aa39b5d2eb9604db9e7ad9e1bbcbe8a5cef1e369f2c7cb15" exitCode=0
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928860 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerDied","Data":"4502bc50598283ef9116fbb9bfb782d2bfae62b49a1b0440934ae48f1c96f622"}
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928902 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerDied","Data":"a045dbbd00f6405c8f0f5ed58b21fded3543087ff5b8036556513c8bb5e9662a"}
Mar 18 14:25:59 crc kubenswrapper[4857]: I0318 14:25:59.928918 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerDied","Data":"01ed5722c6df22f4aa39b5d2eb9604db9e7ad9e1bbcbe8a5cef1e369f2c7cb15"}
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.148914 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564066-5lpxl"]
Mar 18 14:26:00 crc kubenswrapper[4857]: E0318 14:26:00.149379 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbbe403-332c-403c-a528-fe42fc3fe32b" containerName="init"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.149395 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbbe403-332c-403c-a528-fe42fc3fe32b" containerName="init"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.149642 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbbe403-332c-403c-a528-fe42fc3fe32b" containerName="init"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.150384 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564066-5lpxl"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.159010 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.160603 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564066-5lpxl"]
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.162332 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.162697 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.249482 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrf9\" (UniqueName: \"kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9\") pod \"auto-csr-approver-29564066-5lpxl\" (UID: \"ab24ef5b-3d16-4324-93e3-8e127478a489\") " pod="openshift-infra/auto-csr-approver-29564066-5lpxl"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.352555 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrf9\" (UniqueName: \"kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9\") pod \"auto-csr-approver-29564066-5lpxl\" (UID: \"ab24ef5b-3d16-4324-93e3-8e127478a489\") " pod="openshift-infra/auto-csr-approver-29564066-5lpxl"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.390416 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrf9\" (UniqueName: \"kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9\") pod \"auto-csr-approver-29564066-5lpxl\" (UID: \"ab24ef5b-3d16-4324-93e3-8e127478a489\") " pod="openshift-infra/auto-csr-approver-29564066-5lpxl"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.476371 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564066-5lpxl"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.519397 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.524011 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v2q2q"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557541 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557608 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557651 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557793 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77w4\" (UniqueName: \"kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4\") pod \"ec10534a-1292-409a-adff-ecfac639275f\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557838 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557869 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.557928 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558010 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558042 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data\") pod \"ec10534a-1292-409a-adff-ecfac639275f\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558110 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558163 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfswl\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558316 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle\") pod \"ec10534a-1292-409a-adff-ecfac639275f\" (UID: \"ec10534a-1292-409a-adff-ecfac639275f\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558370 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file\") pod \"a61234af-d85a-4afc-ad53-ed997001f645\" (UID: \"a61234af-d85a-4afc-ad53-ed997001f645\") "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.558542 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.559278 4857 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.566663 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4" (OuterVolumeSpecName: "kube-api-access-x77w4") pod "ec10534a-1292-409a-adff-ecfac639275f" (UID: "ec10534a-1292-409a-adff-ecfac639275f"). InnerVolumeSpecName "kube-api-access-x77w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.567152 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.568257 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.587152 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config" (OuterVolumeSpecName: "config") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.587630 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.603688 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl" (OuterVolumeSpecName: "kube-api-access-tfswl") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "kube-api-access-tfswl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.607604 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out" (OuterVolumeSpecName: "config-out") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.612206 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.655895 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec10534a-1292-409a-adff-ecfac639275f" (UID: "ec10534a-1292-409a-adff-ecfac639275f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662208 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "pvc-155423eb-758a-4e2b-8105-8cd95f837e8e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662552 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662590 4857 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662627 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") on node \"crc\" "
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662643 4857 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a61234af-d85a-4afc-ad53-ed997001f645-config-out\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662654 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77w4\" (UniqueName: \"kubernetes.io/projected/ec10534a-1292-409a-adff-ecfac639275f-kube-api-access-x77w4\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662672 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662682 4857 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662691 4857 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-tls-assets\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662702 4857 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a61234af-d85a-4afc-ad53-ed997001f645-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.662725 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfswl\" (UniqueName: \"kubernetes.io/projected/a61234af-d85a-4afc-ad53-ed997001f645-kube-api-access-tfswl\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.673509 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config" (OuterVolumeSpecName: "web-config") pod "a61234af-d85a-4afc-ad53-ed997001f645" (UID: "a61234af-d85a-4afc-ad53-ed997001f645"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.726561 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.726896 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-155423eb-758a-4e2b-8105-8cd95f837e8e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e") on node "crc"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.765806 4857 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a61234af-d85a-4afc-ad53-ed997001f645-web-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.765869 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.772027 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data" (OuterVolumeSpecName: "config-data") pod "ec10534a-1292-409a-adff-ecfac639275f" (UID: "ec10534a-1292-409a-adff-ecfac639275f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.868641 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec10534a-1292-409a-adff-ecfac639275f-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.940671 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-v2q2q" event={"ID":"ec10534a-1292-409a-adff-ecfac639275f","Type":"ContainerDied","Data":"dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1"}
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.940732 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd733ff97edc0c3b7afbce8d7af8a4d338527e30b817b285e9f87570200405b1"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.940839 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-v2q2q"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.960022 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.960722 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a61234af-d85a-4afc-ad53-ed997001f645","Type":"ContainerDied","Data":"71eacd139bf133b4eb7195a232d8c32154193a16c8f8f51dd2aff958a8ef0f8c"}
Mar 18 14:26:00 crc kubenswrapper[4857]: I0318 14:26:00.960786 4857 scope.go:117] "RemoveContainer" containerID="4502bc50598283ef9116fbb9bfb782d2bfae62b49a1b0440934ae48f1c96f622"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.000120 4857 scope.go:117] "RemoveContainer" containerID="a045dbbd00f6405c8f0f5ed58b21fded3543087ff5b8036556513c8bb5e9662a"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.034079 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.051858 4857 scope.go:117] "RemoveContainer" containerID="01ed5722c6df22f4aa39b5d2eb9604db9e7ad9e1bbcbe8a5cef1e369f2c7cb15"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.055718 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.084767 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 18 14:26:01 crc kubenswrapper[4857]: E0318 14:26:01.085589 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="init-config-reloader"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085603 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="init-config-reloader"
Mar 18 14:26:01 crc kubenswrapper[4857]: E0318 14:26:01.085616 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="prometheus"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085622 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="prometheus"
Mar 18 14:26:01 crc kubenswrapper[4857]: E0318 14:26:01.085636 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="config-reloader"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085642 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="config-reloader"
Mar 18 14:26:01 crc kubenswrapper[4857]: E0318 14:26:01.085652 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10534a-1292-409a-adff-ecfac639275f" containerName="keystone-db-sync"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085658 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10534a-1292-409a-adff-ecfac639275f" containerName="keystone-db-sync"
Mar 18 14:26:01 crc kubenswrapper[4857]: E0318 14:26:01.085666 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="thanos-sidecar"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085672 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="thanos-sidecar"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085886 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec10534a-1292-409a-adff-ecfac639275f" containerName="keystone-db-sync"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085907 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="thanos-sidecar"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085938 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="prometheus"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.085950 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61234af-d85a-4afc-ad53-ed997001f645" containerName="config-reloader"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.088841 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.091791 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.092349 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.092732 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.092928 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.093118 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pvvpn"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.096807 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.097080 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.105015 4857 scope.go:117] "RemoveContainer" containerID="2c7382308832b285f0127b2fb40e1de03d1be2ba2f0549232624b720577301f4"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.105792 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.106285 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.142909 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564066-5lpxl"]
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.158345 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178641 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178727 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178766 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0"
Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178817 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178836 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178891 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178911 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178973 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t9v\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-kube-api-access-d6t9v\") pod \"prometheus-metric-storage-0\" 
(UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.178995 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.179030 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.179057 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.179079 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.179126 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.209539 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61234af-d85a-4afc-ad53-ed997001f645" path="/var/lib/kubelet/pods/a61234af-d85a-4afc-ad53-ed997001f645/volumes" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281739 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t9v\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-kube-api-access-d6t9v\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281817 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281867 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281894 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281915 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.281981 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282018 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282072 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282091 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282110 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282128 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282184 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.282210 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc 
kubenswrapper[4857]: I0318 14:26:01.284622 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.290987 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.295988 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.296552 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/117d706b-860f-4f17-8f2b-5d27b7cdfe61-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.299351 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 
18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.309294 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ttvd9"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.314994 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.320785 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.320844 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/375e81b5ff671f5b992332946377b9ca3c84314088961a63afa1082ad97c465d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.324275 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-config\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.326560 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.331959 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.332406 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.332910 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.334523 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/117d706b-860f-4f17-8f2b-5d27b7cdfe61-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.345617 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t9v\" (UniqueName: \"kubernetes.io/projected/117d706b-860f-4f17-8f2b-5d27b7cdfe61-kube-api-access-d6t9v\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " 
pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.356953 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.357977 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.362557 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.365407 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4kgzh" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.365624 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392182 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqb42\" (UniqueName: \"kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392250 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392335 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data\") pod \"keystone-bootstrap-ttvd9\" (UID: 
\"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392427 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392468 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.392500 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.414363 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ttvd9"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.453571 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.501152 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 
crc kubenswrapper[4857]: I0318 14:26:01.501217 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.501259 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.501330 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqb42\" (UniqueName: \"kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.501361 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.501431 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.505011 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-db-sync-4sc5j"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.506913 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.517346 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.519234 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.522595 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.526466 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.534800 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data\") pod \"keystone-bootstrap-ttvd9\" (UID: 
\"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.535430 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-h4jlb" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.535799 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.577867 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4sc5j"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.621703 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.621964 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.622274 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72bpz\" (UniqueName: \"kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.624656 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-155423eb-758a-4e2b-8105-8cd95f837e8e\") pod \"prometheus-metric-storage-0\" (UID: \"117d706b-860f-4f17-8f2b-5d27b7cdfe61\") " pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.631528 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqb42\" (UniqueName: \"kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42\") pod \"keystone-bootstrap-ttvd9\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.684391 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.687684 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.688148 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.755931 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72bpz\" (UniqueName: \"kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.756342 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.756428 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.763923 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.774027 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.789989 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.813593 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.833535 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72bpz\" (UniqueName: \"kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz\") pod \"heat-db-sync-4sc5j\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.864191 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-4sc5j" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885317 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885408 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885435 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885463 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885540 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: 
\"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.885565 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmkv\" (UniqueName: \"kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.937148 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nmg7v"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.946334 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.952069 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.952382 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pf4wf" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.952532 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.963281 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmg7v"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.990461 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sbw4r"] Mar 18 14:26:01 crc kubenswrapper[4857]: I0318 14:26:01.992386 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002317 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002373 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002410 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002435 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfsb\" (UniqueName: \"kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002466 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " 
pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002504 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002584 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002606 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmkv\" (UniqueName: \"kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002674 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002758 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc 
kubenswrapper[4857]: I0318 14:26:02.002721 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003045 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4xq55" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.002804 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003298 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003361 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvls\" (UniqueName: \"kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003437 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.003714 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.004697 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.027173 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.030843 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.031553 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 
14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.036738 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.065865 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sbw4r"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.112574 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvls\" (UniqueName: \"kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.112661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.112737 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.112771 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfsb\" (UniqueName: \"kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb\") pod \"neutron-db-sync-sbw4r\" (UID: 
\"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.112849 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.113130 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.113307 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.113338 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.113367 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.165314 4857 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="dnsmasq-dns" containerID="cri-o://eda86b486364a69205f4f18182f567e0ff9fff61735ecb699fa126278957b608" gracePeriod=10 Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.165803 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.165904 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" event={"ID":"ab24ef5b-3d16-4324-93e3-8e127478a489","Type":"ContainerStarted","Data":"f6d9c8d61ea4e8ca5d0af7d6933384ef0e6dd7fff9dbe3ec0d4583c049986350"} Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.165973 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tpllm"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.191370 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.195224 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.245498 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2rfsb\" (UniqueName: \"kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.246144 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle\") pod \"neutron-db-sync-sbw4r\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.250495 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.250608 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.224850 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.261789 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.264868 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273335 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273432 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvls\" (UniqueName: \"kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls\") pod \"cinder-db-sync-nmg7v\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273581 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle\") 
pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273607 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273638 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273729 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.273805 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnwg2\" (UniqueName: \"kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.274001 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-skh4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.280367 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmkv\" (UniqueName: \"kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv\") pod \"dnsmasq-dns-847c4cc679-jf484\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " 
pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.280985 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cxdpg"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.283385 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.299639 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.299949 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bhdh5" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.354400 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cxdpg"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.375863 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.375946 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.376015 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 
crc kubenswrapper[4857]: I0318 14:26:02.380869 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnwg2\" (UniqueName: \"kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.381095 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.382184 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.389869 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.399671 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.410196 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tpllm"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.411568 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.436406 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnwg2\" (UniqueName: \"kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.461898 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.480586 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.488101 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts\") pod \"placement-db-sync-tpllm\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.490367 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.490429 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.491721 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpl2t\" (UniqueName: \"kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.492984 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.570870 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.573603 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.592198 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593371 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593463 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593463 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593536 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593588 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593628 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593811 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593848 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpw9\" (UniqueName: \"kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.593875 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl2t\" (UniqueName: \"kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t\") pod \"barbican-db-sync-cxdpg\" (UID: 
\"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.614266 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.615219 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tpllm" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.617014 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl2t\" (UniqueName: \"kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.622037 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle\") pod \"barbican-db-sync-cxdpg\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.634483 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.669127 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696066 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696112 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696146 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696271 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696299 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpw9\" (UniqueName: \"kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.696378 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.697316 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.697622 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.698277 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.700586 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.701476 4857 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.703834 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.729360 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpw9\" (UniqueName: \"kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9\") pod \"dnsmasq-dns-785d8bcb8c-pgqhf\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.752260 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.752411 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.757057 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.757254 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.769846 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.781241 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.796235 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.797696 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.803027 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.803212 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.803627 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.803785 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fkqhd" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.803923 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zgx\" (UniqueName: \"kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx\") pod 
\"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.804113 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.804218 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.804285 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.804790 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.866861 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:02 crc kubenswrapper[4857]: W0318 14:26:02.891313 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode97fae11_ac4e_4ad1_a7fa_ca06ffbe69af.slice/crio-061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e WatchSource:0}: Error finding container 061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e: Status 404 returned error can't find the 
container with id 061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.920539 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.924918 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.928256 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 18 14:26:02 crc kubenswrapper[4857]: I0318 14:26:02.928361 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:02.939421 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:02.939515 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:02.940408 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:02.940633 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 
14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.119151 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.138143 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.138459 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.139090 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140178 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140268 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " 
pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140331 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140512 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140546 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140656 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2hh\" (UniqueName: \"kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140858 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zgx\" (UniqueName: \"kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " 
pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.140894 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.141035 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.141063 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.141094 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.154317 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: 
I0318 14:26:03.155264 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.161622 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.161728 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.184205 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zgx\" (UniqueName: \"kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx\") pod \"ceilometer-0\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.212196 4857 generic.go:334] "Generic (PLEG): container finished" podID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerID="eda86b486364a69205f4f18182f567e0ff9fff61735ecb699fa126278957b608" exitCode=0 Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.236500 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ttvd9"] Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.237428 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" 
event={"ID":"c6f38bca-cc01-4f27-b0b8-9ba8f1743506","Type":"ContainerDied","Data":"eda86b486364a69205f4f18182f567e0ff9fff61735ecb699fa126278957b608"} Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.237947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttvd9" event={"ID":"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af","Type":"ContainerStarted","Data":"061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e"} Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.240820 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243201 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243255 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243289 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243313 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243375 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk2hh\" (UniqueName: \"kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243494 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243524 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9lk\" (UniqueName: \"kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243560 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243599 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243620 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243696 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243730 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243772 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243813 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.243860 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.246013 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.246358 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.254649 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping MountDevice... Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.254698 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/53a11717549d4e5fa20456445f0a3110867e942e65caa41580a09c0ef37f0f67/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.255918 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.258182 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.259064 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.266456 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.284942 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2hh\" (UniqueName: \"kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.373620 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.373979 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.375365 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.375695 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz9lk\" (UniqueName: 
\"kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.375849 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.376091 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.376141 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.376217 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.378434 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.382030 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.385466 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.385831 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.387008 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.387033 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce4ba88565bbf6eaffcfc19803bd3d9355ffb3d5f28210b890fc7555a4578986/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.396514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.402807 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.466017 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz9lk\" (UniqueName: \"kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.468720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.553999 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4sc5j"] Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.563034 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.649182 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.746842 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:03 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:03 crc kubenswrapper[4857]: > Mar 18 14:26:03 crc kubenswrapper[4857]: I0318 14:26:03.753028 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.155533 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.372194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4sc5j" event={"ID":"fd4c05d5-43c8-4aad-9052-a519d7c6d182","Type":"ContainerStarted","Data":"b2b3806152b06f5b37955f870076a24993763fd71f66464820164243ecd5d064"} Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.411107 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerStarted","Data":"5bfc602ea5ab1a884e13945e074532a8a57ba3b8d83d186dac35fb09b7cca64a"} Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.427243 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" event={"ID":"ab24ef5b-3d16-4324-93e3-8e127478a489","Type":"ContainerStarted","Data":"a6f24be92f9b3f98471fa568880e2b653734130042b51aab369f6306e6d36747"} Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.446469 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sbw4r"] Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.468666 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" podStartSLOduration=2.992346982 podStartE2EDuration="4.468640367s" podCreationTimestamp="2026-03-18 14:26:00 +0000 UTC" firstStartedPulling="2026-03-18 14:26:01.129211066 +0000 UTC m=+1545.258339523" lastFinishedPulling="2026-03-18 14:26:02.605504451 +0000 UTC m=+1546.734632908" observedRunningTime="2026-03-18 14:26:04.456234926 +0000 UTC m=+1548.585363383" watchObservedRunningTime="2026-03-18 14:26:04.468640367 +0000 UTC m=+1548.597768824" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.651305 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.729854 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.730182 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.730237 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.730304 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9p2\" (UniqueName: \"kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.730377 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.730432 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb\") pod \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\" (UID: \"c6f38bca-cc01-4f27-b0b8-9ba8f1743506\") " Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.791579 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2" (OuterVolumeSpecName: "kube-api-access-lt9p2") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "kube-api-access-lt9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.836738 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9p2\" (UniqueName: \"kubernetes.io/projected/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-kube-api-access-lt9p2\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.960793 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.968637 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config" (OuterVolumeSpecName: "config") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:04 crc kubenswrapper[4857]: I0318 14:26:04.982419 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.055168 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.055208 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.055218 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.090136 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.090820 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6f38bca-cc01-4f27-b0b8-9ba8f1743506" (UID: "c6f38bca-cc01-4f27-b0b8-9ba8f1743506"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.663842 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.663888 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6f38bca-cc01-4f27-b0b8-9ba8f1743506-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.825503 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttvd9" event={"ID":"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af","Type":"ContainerStarted","Data":"fce90338e24ea1f22326564a290a06286b541c35da0399701d3f9ea0f3146e6c"} Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.883023 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sbw4r" event={"ID":"2ea129bc-8d82-472e-8c4d-0f1b5e79078e","Type":"ContainerStarted","Data":"85139deb6db0388d399aff8c46b4c102d113b30c53475c95a94d9d61446f6658"} Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.915957 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmg7v"] Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.933922 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.934692 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-pbgw2" event={"ID":"c6f38bca-cc01-4f27-b0b8-9ba8f1743506","Type":"ContainerDied","Data":"b0bf2aed185c9270e7efc2b317995b7116ea520e724a8a1d56dd170b3814957f"} Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.934783 4857 scope.go:117] "RemoveContainer" containerID="eda86b486364a69205f4f18182f567e0ff9fff61735ecb699fa126278957b608" Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.948506 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:05 crc kubenswrapper[4857]: I0318 14:26:05.955398 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ttvd9" podStartSLOduration=4.955369744 podStartE2EDuration="4.955369744s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:26:05.863679992 +0000 UTC m=+1549.992808449" watchObservedRunningTime="2026-03-18 14:26:05.955369744 +0000 UTC m=+1550.084498201" Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.077603 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tpllm"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.089682 4857 scope.go:117] "RemoveContainer" containerID="9fcbaa69ebdd260ce6ee91ef9c08370d43307cd65a1add7b8d0bf0f328d7d16b" Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.136696 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cxdpg"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.159797 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 
14:26:06.243892 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-pbgw2"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.320051 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.354679 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.632719 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:06 crc kubenswrapper[4857]: W0318 14:26:06.655056 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaee64023_1977_416f_842c_c767e17a910e.slice/crio-9eceb81685c7ece7f1efc9416c2355076edf9a88e815608c7cad963e4018edb5 WatchSource:0}: Error finding container 9eceb81685c7ece7f1efc9416c2355076edf9a88e815608c7cad963e4018edb5: Status 404 returned error can't find the container with id 9eceb81685c7ece7f1efc9416c2355076edf9a88e815608c7cad963e4018edb5 Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.789864 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.965118 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sbw4r" event={"ID":"2ea129bc-8d82-472e-8c4d-0f1b5e79078e","Type":"ContainerStarted","Data":"6fc05dfc3b2dcd496f5146a3392c9717fa78b490ac763824baaff9c85a6de47a"} Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.978143 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmg7v" event={"ID":"6791c442-3e89-4211-b980-e00afa59d6c1","Type":"ContainerStarted","Data":"519b3f5265e17f38cb7bf63094aede0ebe9f5acecdad2756ac5810459ddc842e"} Mar 18 14:26:06 crc kubenswrapper[4857]: I0318 14:26:06.993510 4857 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sbw4r" podStartSLOduration=5.993486818 podStartE2EDuration="5.993486818s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:26:06.988810681 +0000 UTC m=+1551.117939138" watchObservedRunningTime="2026-03-18 14:26:06.993486818 +0000 UTC m=+1551.122615275" Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.006590 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerStarted","Data":"5fddfd2866749dd8c00fa61c4ec475bebd9819307f45afc49eae65b654bcc07d"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.006656 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerStarted","Data":"30195da7d6feb87f612e6298427c95444882c4af59b96675a5671b99be836388"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.016628 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerStarted","Data":"9eceb81685c7ece7f1efc9416c2355076edf9a88e815608c7cad963e4018edb5"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.022582 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerStarted","Data":"21d2ab10abbedd2a9dc502f712c795d10ab0f89c796501ec85f1ea89d9f45c6e"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.028255 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tpllm" 
event={"ID":"5181712d-25da-484b-9eb5-3fc9230bab14","Type":"ContainerStarted","Data":"3c469ed2e9e13991ecda54fd4877606220ca50c6f75db3382917ca48be784c32"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.030823 4857 generic.go:334] "Generic (PLEG): container finished" podID="ab24ef5b-3d16-4324-93e3-8e127478a489" containerID="a6f24be92f9b3f98471fa568880e2b653734130042b51aab369f6306e6d36747" exitCode=0 Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.030887 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" event={"ID":"ab24ef5b-3d16-4324-93e3-8e127478a489","Type":"ContainerDied","Data":"a6f24be92f9b3f98471fa568880e2b653734130042b51aab369f6306e6d36747"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.073816 4857 generic.go:334] "Generic (PLEG): container finished" podID="c78d39e8-cd3f-42d6-8722-131392546451" containerID="5f3b0cfd9734cfbf42cb9975364ac040be25269a12f3a7d03e93ed530d4c428c" exitCode=0 Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.073896 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jf484" event={"ID":"c78d39e8-cd3f-42d6-8722-131392546451","Type":"ContainerDied","Data":"5f3b0cfd9734cfbf42cb9975364ac040be25269a12f3a7d03e93ed530d4c428c"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.073928 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jf484" event={"ID":"c78d39e8-cd3f-42d6-8722-131392546451","Type":"ContainerStarted","Data":"457c4c7b38545554d231bf551c36c2b2db8c60eb4acf226d20ebfecad0804a6c"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.078775 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cxdpg" event={"ID":"03c5e747-f831-4a2d-a73f-a26848b5c2a6","Type":"ContainerStarted","Data":"562740759aca043df72154e806af044e0aa5dbc01ef0e793bd2fb80b758be62c"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.112981 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerStarted","Data":"d9064c569b6f9c2a764d6eb83d93872c6b26bc09d8f93b1d53db1d5bffce3e69"} Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.329943 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" path="/var/lib/kubelet/pods/c6f38bca-cc01-4f27-b0b8-9ba8f1743506/volumes" Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.495505 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:07 crc kubenswrapper[4857]: I0318 14:26:07.737574 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.183252 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-jf484" event={"ID":"c78d39e8-cd3f-42d6-8722-131392546451","Type":"ContainerDied","Data":"457c4c7b38545554d231bf551c36c2b2db8c60eb4acf226d20ebfecad0804a6c"} Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.183569 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457c4c7b38545554d231bf551c36c2b2db8c60eb4acf226d20ebfecad0804a6c" Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.187057 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerID="5fddfd2866749dd8c00fa61c4ec475bebd9819307f45afc49eae65b654bcc07d" exitCode=0 Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.188291 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerDied","Data":"5fddfd2866749dd8c00fa61c4ec475bebd9819307f45afc49eae65b654bcc07d"} Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.415953 4857 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.593864 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780440 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780523 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmkv\" (UniqueName: \"kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780583 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780608 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780738 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: 
\"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.780982 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb\") pod \"c78d39e8-cd3f-42d6-8722-131392546451\" (UID: \"c78d39e8-cd3f-42d6-8722-131392546451\") " Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.842904 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv" (OuterVolumeSpecName: "kube-api-access-xdmkv") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "kube-api-access-xdmkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:08 crc kubenswrapper[4857]: I0318 14:26:08.884071 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmkv\" (UniqueName: \"kubernetes.io/projected/c78d39e8-cd3f-42d6-8722-131392546451-kube-api-access-xdmkv\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.232728 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-jf484" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.311348 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.320697 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.321048 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.321202 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.321363 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config" (OuterVolumeSpecName: "config") pod "c78d39e8-cd3f-42d6-8722-131392546451" (UID: "c78d39e8-cd3f-42d6-8722-131392546451"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.400820 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.400980 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.400993 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.401040 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.401055 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c78d39e8-cd3f-42d6-8722-131392546451-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.483282 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerStarted","Data":"668d56d2fb3d72a9d4ae7e58012c8f766264fb300c44083a9194b7bb0d8b4bb5"} Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.483363 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerStarted","Data":"3157678ef60502c23c7fd35bbd9d6fb0bc5277fff877bb030bc1eb364d877fff"} Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.483383 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" event={"ID":"ab24ef5b-3d16-4324-93e3-8e127478a489","Type":"ContainerDied","Data":"f6d9c8d61ea4e8ca5d0af7d6933384ef0e6dd7fff9dbe3ec0d4583c049986350"} Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.483402 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d9c8d61ea4e8ca5d0af7d6933384ef0e6dd7fff9dbe3ec0d4583c049986350" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.494739 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.605114 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrf9\" (UniqueName: \"kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9\") pod \"ab24ef5b-3d16-4324-93e3-8e127478a489\" (UID: \"ab24ef5b-3d16-4324-93e3-8e127478a489\") " Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.638283 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9" (OuterVolumeSpecName: "kube-api-access-6nrf9") pod "ab24ef5b-3d16-4324-93e3-8e127478a489" (UID: "ab24ef5b-3d16-4324-93e3-8e127478a489"). InnerVolumeSpecName "kube-api-access-6nrf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.682881 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.700596 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-jf484"] Mar 18 14:26:09 crc kubenswrapper[4857]: I0318 14:26:09.710091 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrf9\" (UniqueName: \"kubernetes.io/projected/ab24ef5b-3d16-4324-93e3-8e127478a489-kube-api-access-6nrf9\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.194037 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564066-5lpxl" Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.225210 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c78d39e8-cd3f-42d6-8722-131392546451" path="/var/lib/kubelet/pods/c78d39e8-cd3f-42d6-8722-131392546451/volumes" Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.226092 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerStarted","Data":"1d7586201171caaa77e621eb1ab78951c28cd47872336c69093b537606c2e781"} Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.226117 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerStarted","Data":"1421116f7ad96b51edcefeb06d8855061c861c989ef286696056d9cc1132130b"} Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.258622 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564060-n7vkq"] Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.279792 4857 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564060-n7vkq"] Mar 18 14:26:11 crc kubenswrapper[4857]: I0318 14:26:11.289547 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" podStartSLOduration=9.289522776 podStartE2EDuration="9.289522776s" podCreationTimestamp="2026-03-18 14:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:26:11.236930266 +0000 UTC m=+1555.366058723" watchObservedRunningTime="2026-03-18 14:26:11.289522776 +0000 UTC m=+1555.418651233" Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.256732 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerStarted","Data":"298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f"} Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.257172 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-log" containerID="cri-o://3157678ef60502c23c7fd35bbd9d6fb0bc5277fff877bb030bc1eb364d877fff" gracePeriod=30 Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.257686 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-httpd" containerID="cri-o://298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f" gracePeriod=30 Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.285435 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerStarted","Data":"418a53a173f3de1e38447b360263535d04d99c261b4a0b8897ee6735183f375e"} Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.285543 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.286343 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-log" containerID="cri-o://1d7586201171caaa77e621eb1ab78951c28cd47872336c69093b537606c2e781" gracePeriod=30 Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.286565 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-httpd" containerID="cri-o://418a53a173f3de1e38447b360263535d04d99c261b4a0b8897ee6735183f375e" gracePeriod=30 Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.306416 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.306382886 podStartE2EDuration="11.306382886s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:26:12.302792916 +0000 UTC m=+1556.431921373" watchObservedRunningTime="2026-03-18 14:26:12.306382886 +0000 UTC m=+1556.435511343" Mar 18 14:26:12 crc kubenswrapper[4857]: I0318 14:26:12.371713 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.371676036 podStartE2EDuration="11.371676036s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:26:12.357849449 +0000 UTC m=+1556.486977906" watchObservedRunningTime="2026-03-18 14:26:12.371676036 +0000 UTC m=+1556.500804493" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.194820 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f" path="/var/lib/kubelet/pods/6c36fcd1-5e20-4a20-a924-2cc0d33e4e5f/volumes" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.326731 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerDied","Data":"3157678ef60502c23c7fd35bbd9d6fb0bc5277fff877bb030bc1eb364d877fff"} Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.326663 4857 generic.go:334] "Generic (PLEG): container finished" podID="aee64023-1977-416f-842c-c767e17a910e" containerID="3157678ef60502c23c7fd35bbd9d6fb0bc5277fff877bb030bc1eb364d877fff" exitCode=143 Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.342808 4857 generic.go:334] "Generic (PLEG): container finished" podID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerID="418a53a173f3de1e38447b360263535d04d99c261b4a0b8897ee6735183f375e" exitCode=143 Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.342844 4857 generic.go:334] "Generic (PLEG): container finished" podID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerID="1d7586201171caaa77e621eb1ab78951c28cd47872336c69093b537606c2e781" exitCode=143 Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.342924 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerDied","Data":"418a53a173f3de1e38447b360263535d04d99c261b4a0b8897ee6735183f375e"} Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.343073 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerDied","Data":"1d7586201171caaa77e621eb1ab78951c28cd47872336c69093b537606c2e781"} Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.531038 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.801833 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:13 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:13 crc kubenswrapper[4857]: > Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.882540 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.896283 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.896356 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.897223 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.897287 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz9lk\" (UniqueName: \"kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.897390 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.897441 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.897489 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs\") pod \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\" (UID: \"5e7565f1-2326-4bab-a9c1-a18dc13d227e\") " Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.898143 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts" (OuterVolumeSpecName: "scripts") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.899415 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs" (OuterVolumeSpecName: "logs") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.901392 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.905709 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.906715 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.906739 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e7565f1-2326-4bab-a9c1-a18dc13d227e-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.912075 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk" (OuterVolumeSpecName: "kube-api-access-vz9lk") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: 
"5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "kube-api-access-vz9lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.936521 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367" (OuterVolumeSpecName: "glance") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "pvc-a61b5137-25a0-4370-8b60-d456f1a37367". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:26:13 crc kubenswrapper[4857]: I0318 14:26:13.945544 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.009473 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") on node \"crc\" " Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.443784 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.445949 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz9lk\" (UniqueName: \"kubernetes.io/projected/5e7565f1-2326-4bab-a9c1-a18dc13d227e-kube-api-access-vz9lk\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.462296 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.474920 4857 generic.go:334] "Generic (PLEG): container finished" podID="aee64023-1977-416f-842c-c767e17a910e" containerID="298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f" exitCode=0 Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.475046 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerDied","Data":"298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f"} Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.480378 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data" (OuterVolumeSpecName: "config-data") pod "5e7565f1-2326-4bab-a9c1-a18dc13d227e" (UID: "5e7565f1-2326-4bab-a9c1-a18dc13d227e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.491178 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5e7565f1-2326-4bab-a9c1-a18dc13d227e","Type":"ContainerDied","Data":"d9064c569b6f9c2a764d6eb83d93872c6b26bc09d8f93b1d53db1d5bffce3e69"} Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.496550 4857 scope.go:117] "RemoveContainer" containerID="418a53a173f3de1e38447b360263535d04d99c261b4a0b8897ee6735183f375e" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.496776 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.526658 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.526844 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a61b5137-25a0-4370-8b60-d456f1a37367" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367") on node "crc" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.552017 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.557556 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.557659 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e7565f1-2326-4bab-a9c1-a18dc13d227e-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.660022 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.687962 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.689225 4857 scope.go:117] "RemoveContainer" containerID="1d7586201171caaa77e621eb1ab78951c28cd47872336c69093b537606c2e781" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.713148 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714008 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab24ef5b-3d16-4324-93e3-8e127478a489" 
containerName="oc" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714031 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab24ef5b-3d16-4324-93e3-8e127478a489" containerName="oc" Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714069 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-httpd" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714075 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-httpd" Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714093 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78d39e8-cd3f-42d6-8722-131392546451" containerName="init" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714099 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78d39e8-cd3f-42d6-8722-131392546451" containerName="init" Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714111 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="dnsmasq-dns" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714117 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="dnsmasq-dns" Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714132 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="init" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714138 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="init" Mar 18 14:26:14 crc kubenswrapper[4857]: E0318 14:26:14.714150 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-log" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714156 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-log" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714361 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f38bca-cc01-4f27-b0b8-9ba8f1743506" containerName="dnsmasq-dns" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714379 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-httpd" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714389 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" containerName="glance-log" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714404 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78d39e8-cd3f-42d6-8722-131392546451" containerName="init" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.714417 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab24ef5b-3d16-4324-93e3-8e127478a489" containerName="oc" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.717398 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.720644 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.721012 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.725894 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.892564 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.893603 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.899629 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.900061 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.900246 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.900357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.900521 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:14 crc kubenswrapper[4857]: I0318 14:26:14.900668 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhks8\" (UniqueName: \"kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.337069 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nhks8\" (UniqueName: \"kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.338481 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.338727 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.338959 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.339178 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.339338 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.339440 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.343683 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.348036 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.357191 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.378220 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.379135 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.384626 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: E0318 14:26:15.405302 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaee64023_1977_416f_842c_c767e17a910e.slice/crio-conmon-298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e7565f1_2326_4bab_a9c1_a18dc13d227e.slice/crio-d9064c569b6f9c2a764d6eb83d93872c6b26bc09d8f93b1d53db1d5bffce3e69\": RecentStats: unable to find data in memory cache]" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.440702 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7565f1-2326-4bab-a9c1-a18dc13d227e" path="/var/lib/kubelet/pods/5e7565f1-2326-4bab-a9c1-a18dc13d227e/volumes" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.441567 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.461210 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhks8\" (UniqueName: \"kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.480384 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.480426 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce4ba88565bbf6eaffcfc19803bd3d9355ffb3d5f28210b890fc7555a4578986/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.635491 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.663023 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.690393 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.748572 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.748707 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.748927 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.748959 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.749008 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: 
\"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.749075 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.749147 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk2hh\" (UniqueName: \"kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.749211 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle\") pod \"aee64023-1977-416f-842c-c767e17a910e\" (UID: \"aee64023-1977-416f-842c-c767e17a910e\") " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.750565 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs" (OuterVolumeSpecName: "logs") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.750742 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.791728 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts" (OuterVolumeSpecName: "scripts") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.791956 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a" (OuterVolumeSpecName: "glance") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.793242 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh" (OuterVolumeSpecName: "kube-api-access-dk2hh") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "kube-api-access-dk2hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.822490 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.831135 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.851098 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.853430 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aee64023-1977-416f-842c-c767e17a910e-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.853657 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk2hh\" (UniqueName: \"kubernetes.io/projected/aee64023-1977-416f-842c-c767e17a910e-kube-api-access-dk2hh\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.853730 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.853812 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.854298 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.854492 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") on node \"crc\" " Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.854872 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data" (OuterVolumeSpecName: "config-data") pod "aee64023-1977-416f-842c-c767e17a910e" (UID: "aee64023-1977-416f-842c-c767e17a910e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.949093 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.949561 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a") on node "crc" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.957268 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:15 crc kubenswrapper[4857]: I0318 14:26:15.957801 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee64023-1977-416f-842c-c767e17a910e-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.566501 4857 generic.go:334] "Generic (PLEG): container finished" podID="e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" containerID="fce90338e24ea1f22326564a290a06286b541c35da0399701d3f9ea0f3146e6c" exitCode=0 Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.566567 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttvd9" event={"ID":"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af","Type":"ContainerDied","Data":"fce90338e24ea1f22326564a290a06286b541c35da0399701d3f9ea0f3146e6c"} Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.571389 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aee64023-1977-416f-842c-c767e17a910e","Type":"ContainerDied","Data":"9eceb81685c7ece7f1efc9416c2355076edf9a88e815608c7cad963e4018edb5"} Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.571433 4857 scope.go:117] "RemoveContainer" containerID="298aa1dd78cd78b0c8f463557f08beee01ca05a304b867307cf0e044cbc6626f" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.571555 4857 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.624005 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.637388 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.653311 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:16 crc kubenswrapper[4857]: E0318 14:26:16.654508 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-log" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.654530 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-log" Mar 18 14:26:16 crc kubenswrapper[4857]: E0318 14:26:16.654538 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-httpd" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.654544 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-httpd" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.655316 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-httpd" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.655498 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee64023-1977-416f-842c-c767e17a910e" containerName="glance-log" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.659128 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.662893 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.663137 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.666228 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.777653 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.777777 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qq4\" (UniqueName: \"kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.777818 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.777900 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.777940 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.778000 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.778021 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.778049 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880565 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880689 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880733 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880810 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880835 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880865 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880908 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.880976 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qq4\" (UniqueName: \"kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.882041 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.882151 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.884791 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.884832 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/53a11717549d4e5fa20456445f0a3110867e942e65caa41580a09c0ef37f0f67/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.886307 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.886315 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.888789 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.904347 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.942935 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.943359 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qq4\" (UniqueName: \"kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4\") pod \"glance-default-external-api-0\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " pod="openstack/glance-default-external-api-0" Mar 18 14:26:16 crc kubenswrapper[4857]: I0318 14:26:16.987351 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:26:17 crc kubenswrapper[4857]: I0318 14:26:17.179373 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee64023-1977-416f-842c-c767e17a910e" path="/var/lib/kubelet/pods/aee64023-1977-416f-842c-c767e17a910e/volumes" Mar 18 14:26:18 crc kubenswrapper[4857]: I0318 14:26:18.122094 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:26:18 crc kubenswrapper[4857]: I0318 14:26:18.268179 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:26:18 crc kubenswrapper[4857]: I0318 14:26:18.268458 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" containerID="cri-o://439c148d08328ffe3560e87ffa596cbf8f933046850ff91fb25462b0f59de394" gracePeriod=10 Mar 18 14:26:18 crc kubenswrapper[4857]: I0318 14:26:18.832099 4857 generic.go:334] "Generic (PLEG): container finished" podID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerID="439c148d08328ffe3560e87ffa596cbf8f933046850ff91fb25462b0f59de394" exitCode=0 Mar 18 14:26:18 crc kubenswrapper[4857]: I0318 14:26:18.832149 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7fcl8" event={"ID":"031b5441-9d41-406b-aea4-47ea37b74a2a","Type":"ContainerDied","Data":"439c148d08328ffe3560e87ffa596cbf8f933046850ff91fb25462b0f59de394"} Mar 18 14:26:20 crc kubenswrapper[4857]: I0318 14:26:20.145220 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Mar 18 14:26:23 crc kubenswrapper[4857]: I0318 14:26:23.648883 4857 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:23 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:23 crc kubenswrapper[4857]: > Mar 18 14:26:24 crc kubenswrapper[4857]: I0318 14:26:24.904086 4857 generic.go:334] "Generic (PLEG): container finished" podID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerID="668d56d2fb3d72a9d4ae7e58012c8f766264fb300c44083a9194b7bb0d8b4bb5" exitCode=0 Mar 18 14:26:24 crc kubenswrapper[4857]: I0318 14:26:24.904172 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerDied","Data":"668d56d2fb3d72a9d4ae7e58012c8f766264fb300c44083a9194b7bb0d8b4bb5"} Mar 18 14:26:25 crc kubenswrapper[4857]: I0318 14:26:25.145490 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Mar 18 14:26:25 crc kubenswrapper[4857]: I0318 14:26:25.413127 4857 scope.go:117] "RemoveContainer" containerID="3157678ef60502c23c7fd35bbd9d6fb0bc5277fff877bb030bc1eb364d877fff" Mar 18 14:26:28 crc kubenswrapper[4857]: E0318 14:26:28.048660 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Mar 18 14:26:28 crc kubenswrapper[4857]: E0318 14:26:28.050234 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tnwg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-tpllm_openstack(5181712d-25da-484b-9eb5-3fc9230bab14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:26:28 crc kubenswrapper[4857]: E0318 14:26:28.052034 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tpllm" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.139937 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.206612 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.240459 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data" (OuterVolumeSpecName: "config-data") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.308553 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.308778 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqb42\" (UniqueName: \"kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.308832 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.308912 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.309034 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys\") pod \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\" (UID: \"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af\") " Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.309811 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.316414 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts" (OuterVolumeSpecName: "scripts") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.316708 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.320529 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42" (OuterVolumeSpecName: "kube-api-access-zqb42") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "kube-api-access-zqb42". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.320784 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.347279 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" (UID: "e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.412438 4857 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.412477 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqb42\" (UniqueName: \"kubernetes.io/projected/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-kube-api-access-zqb42\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.412490 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.412500 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:28 crc kubenswrapper[4857]: I0318 14:26:28.412509 4857 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.014999 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ttvd9" 
event={"ID":"e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af","Type":"ContainerDied","Data":"061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e"} Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.015055 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="061cc96690bb70da0ec83d4869bb7d327e16c7896af72a89653ca7e287d1914e" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.015070 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ttvd9" Mar 18 14:26:29 crc kubenswrapper[4857]: E0318 14:26:29.020319 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tpllm" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.253225 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ttvd9"] Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.262977 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ttvd9"] Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.347474 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-92hzs"] Mar 18 14:26:29 crc kubenswrapper[4857]: E0318 14:26:29.348520 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" containerName="keystone-bootstrap" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.348548 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" containerName="keystone-bootstrap" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.348870 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" containerName="keystone-bootstrap" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.349992 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.353240 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.353441 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.353684 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.354315 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.356012 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4kgzh" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.361002 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-92hzs"] Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.469776 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbz5\" (UniqueName: \"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.469906 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " 
pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.470026 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.470059 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.470233 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.470562 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.573200 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 
14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.573367 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.573400 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.573443 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.573546 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.574337 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbz5\" (UniqueName: \"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.578122 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.578842 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.580300 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.595535 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.611032 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.646518 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbz5\" (UniqueName: 
\"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5\") pod \"keystone-bootstrap-92hzs\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.715826 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:26:29 crc kubenswrapper[4857]: I0318 14:26:29.894195 4857 scope.go:117] "RemoveContainer" containerID="62192ef9401dcbaa2a8fd786a343c4153aa36fa692f3b6781234c21bb215ecfd" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.066227 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.066797 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpl2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cxdpg_openstack(03c5e747-f831-4a2d-a73f-a26848b5c2a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.068303 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cxdpg" 
podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" Mar 18 14:26:31 crc kubenswrapper[4857]: I0318 14:26:31.187015 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af" path="/var/lib/kubelet/pods/e97fae11-ac4e-4ad1-a7fa-ca06ffbe69af/volumes" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.392754 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.392951 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72bpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,M
ountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-4sc5j_openstack(fd4c05d5-43c8-4aad-9052-a519d7c6d182): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:26:31 crc kubenswrapper[4857]: E0318 14:26:31.394346 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-4sc5j" podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" Mar 18 14:26:32 crc kubenswrapper[4857]: E0318 14:26:32.251454 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-cxdpg" podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" Mar 18 14:26:32 crc kubenswrapper[4857]: E0318 14:26:32.251835 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-4sc5j" 
podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" Mar 18 14:26:33 crc kubenswrapper[4857]: I0318 14:26:33.672375 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:33 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:33 crc kubenswrapper[4857]: > Mar 18 14:26:35 crc kubenswrapper[4857]: I0318 14:26:35.146590 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 18 14:26:35 crc kubenswrapper[4857]: I0318 14:26:35.147119 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:26:35 crc kubenswrapper[4857]: I0318 14:26:35.852205 4857 generic.go:334] "Generic (PLEG): container finished" podID="2ea129bc-8d82-472e-8c4d-0f1b5e79078e" containerID="6fc05dfc3b2dcd496f5146a3392c9717fa78b490ac763824baaff9c85a6de47a" exitCode=0 Mar 18 14:26:35 crc kubenswrapper[4857]: I0318 14:26:35.852277 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sbw4r" event={"ID":"2ea129bc-8d82-472e-8c4d-0f1b5e79078e","Type":"ContainerDied","Data":"6fc05dfc3b2dcd496f5146a3392c9717fa78b490ac763824baaff9c85a6de47a"} Mar 18 14:26:40 crc kubenswrapper[4857]: I0318 14:26:40.438304 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 18 14:26:40 crc kubenswrapper[4857]: I0318 14:26:40.831303 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:26:44 crc kubenswrapper[4857]: I0318 14:26:44.072377 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:44 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:44 crc kubenswrapper[4857]: > Mar 18 14:26:45 crc kubenswrapper[4857]: I0318 14:26:45.441363 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 18 14:26:45 crc kubenswrapper[4857]: I0318 14:26:45.986346 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.079821 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config\") pod \"031b5441-9d41-406b-aea4-47ea37b74a2a\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.079888 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb\") pod \"031b5441-9d41-406b-aea4-47ea37b74a2a\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.079935 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zww2h\" (UniqueName: \"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h\") pod \"031b5441-9d41-406b-aea4-47ea37b74a2a\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.079960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb\") pod \"031b5441-9d41-406b-aea4-47ea37b74a2a\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.080044 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc\") pod \"031b5441-9d41-406b-aea4-47ea37b74a2a\" (UID: \"031b5441-9d41-406b-aea4-47ea37b74a2a\") " Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.103548 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h" (OuterVolumeSpecName: "kube-api-access-zww2h") pod "031b5441-9d41-406b-aea4-47ea37b74a2a" (UID: "031b5441-9d41-406b-aea4-47ea37b74a2a"). InnerVolumeSpecName "kube-api-access-zww2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.137406 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "031b5441-9d41-406b-aea4-47ea37b74a2a" (UID: "031b5441-9d41-406b-aea4-47ea37b74a2a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.137549 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config" (OuterVolumeSpecName: "config") pod "031b5441-9d41-406b-aea4-47ea37b74a2a" (UID: "031b5441-9d41-406b-aea4-47ea37b74a2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.138103 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "031b5441-9d41-406b-aea4-47ea37b74a2a" (UID: "031b5441-9d41-406b-aea4-47ea37b74a2a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.142512 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "031b5441-9d41-406b-aea4-47ea37b74a2a" (UID: "031b5441-9d41-406b-aea4-47ea37b74a2a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.185203 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.185240 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.185257 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zww2h\" (UniqueName: \"kubernetes.io/projected/031b5441-9d41-406b-aea4-47ea37b74a2a-kube-api-access-zww2h\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.185272 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.185284 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/031b5441-9d41-406b-aea4-47ea37b74a2a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.699029 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-7fcl8" event={"ID":"031b5441-9d41-406b-aea4-47ea37b74a2a","Type":"ContainerDied","Data":"2092f8ed20eb5e4cc0d680aa1438e7acb5aa8e1ab6d9d0fe27d6bfc02d6f603d"} Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.699138 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-7fcl8" Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.746652 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:26:46 crc kubenswrapper[4857]: I0318 14:26:46.758200 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-7fcl8"] Mar 18 14:26:46 crc kubenswrapper[4857]: E0318 14:26:46.828927 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 18 14:26:46 crc kubenswrapper[4857]: E0318 14:26:46.829193 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nch59fhc7h698hf6h57ch546h6fh5b7h599h5c7hffh9fh89h64hd6hfh6fh647h549h546h55dh678h64hdch76h599h554h647hb5hb9h68fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bu
ndle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4zgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(34da3be3-c034-4c63-866c-57097fb5c847): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:47.997679 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" path="/var/lib/kubelet/pods/031b5441-9d41-406b-aea4-47ea37b74a2a/volumes" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.006350 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sbw4r" event={"ID":"2ea129bc-8d82-472e-8c4d-0f1b5e79078e","Type":"ContainerDied","Data":"85139deb6db0388d399aff8c46b4c102d113b30c53475c95a94d9d61446f6658"} Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.006491 4857 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="85139deb6db0388d399aff8c46b4c102d113b30c53475c95a94d9d61446f6658" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.008926 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.050914 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rfsb\" (UniqueName: \"kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb\") pod \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.051096 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config\") pod \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.051531 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle\") pod \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\" (UID: \"2ea129bc-8d82-472e-8c4d-0f1b5e79078e\") " Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.060393 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb" (OuterVolumeSpecName: "kube-api-access-2rfsb") pod "2ea129bc-8d82-472e-8c4d-0f1b5e79078e" (UID: "2ea129bc-8d82-472e-8c4d-0f1b5e79078e"). InnerVolumeSpecName "kube-api-access-2rfsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.090443 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ea129bc-8d82-472e-8c4d-0f1b5e79078e" (UID: "2ea129bc-8d82-472e-8c4d-0f1b5e79078e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.124511 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config" (OuterVolumeSpecName: "config") pod "2ea129bc-8d82-472e-8c4d-0f1b5e79078e" (UID: "2ea129bc-8d82-472e-8c4d-0f1b5e79078e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.156640 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.156686 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:48 crc kubenswrapper[4857]: I0318 14:26:48.156744 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rfsb\" (UniqueName: \"kubernetes.io/projected/2ea129bc-8d82-472e-8c4d-0f1b5e79078e-kube-api-access-2rfsb\") on node \"crc\" DevicePath \"\"" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.303867 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sbw4r" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.859530 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:26:49 crc kubenswrapper[4857]: E0318 14:26:49.860227 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea129bc-8d82-472e-8c4d-0f1b5e79078e" containerName="neutron-db-sync" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.860249 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea129bc-8d82-472e-8c4d-0f1b5e79078e" containerName="neutron-db-sync" Mar 18 14:26:49 crc kubenswrapper[4857]: E0318 14:26:49.860264 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="init" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.860270 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="init" Mar 18 14:26:49 crc kubenswrapper[4857]: E0318 14:26:49.860316 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.860323 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.860547 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea129bc-8d82-472e-8c4d-0f1b5e79078e" containerName="neutron-db-sync" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.860564 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.870425 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.880625 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.975536 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977103 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977160 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977258 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977284 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " 
pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977300 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977339 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.977851 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.984295 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.984579 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.984766 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.984903 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4xq55" Mar 18 14:26:49 crc kubenswrapper[4857]: I0318 14:26:49.990049 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394478 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394520 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394577 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394670 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zdv7\" (UniqueName: \"kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394771 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394827 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.394878 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.395011 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.395041 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.395079 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.395139 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.396807 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.399289 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.400645 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.402229 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.402852 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.442946 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-7fcl8" podUID="031b5441-9d41-406b-aea4-47ea37b74a2a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: i/o timeout" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.468008 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl\") pod \"dnsmasq-dns-55f844cf75-7k6cz\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.497220 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.497814 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.498000 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.506786 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.507052 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.507230 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zdv7\" (UniqueName: \"kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc 
kubenswrapper[4857]: I0318 14:26:50.508711 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.510031 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.523848 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.548170 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.551676 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zdv7\" (UniqueName: \"kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7\") pod \"neutron-6787dc4b5d-t6ns5\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:50 crc kubenswrapper[4857]: I0318 14:26:50.604949 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:26:51 crc kubenswrapper[4857]: E0318 14:26:51.644500 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 18 14:26:51 crc kubenswrapper[4857]: E0318 14:26:51.646119 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tl
s-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcvls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nmg7v_openstack(6791c442-3e89-4211-b980-e00afa59d6c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:26:51 crc kubenswrapper[4857]: E0318 14:26:51.647422 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nmg7v" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" Mar 18 14:26:51 crc kubenswrapper[4857]: I0318 14:26:51.669898 4857 scope.go:117] "RemoveContainer" containerID="439c148d08328ffe3560e87ffa596cbf8f933046850ff91fb25462b0f59de394" Mar 18 14:26:52 crc kubenswrapper[4857]: I0318 14:26:52.168276 4857 scope.go:117] "RemoveContainer" containerID="da2da7a35bc2b162530e25c975294e402b90f87bc143db75802bb6a98fae381f" Mar 18 14:26:52 crc kubenswrapper[4857]: E0318 14:26:52.653424 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-nmg7v" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" Mar 18 14:26:53 crc kubenswrapper[4857]: I0318 14:26:53.655645 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerStarted","Data":"ea6763ddc5d9c5606395d70574156a7f371abadad5f1e8b5cfd8b2223e7abf51"} Mar 18 14:26:53 crc kubenswrapper[4857]: I0318 14:26:53.695418 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:26:53 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:26:53 crc kubenswrapper[4857]: > Mar 18 14:26:53 crc kubenswrapper[4857]: I0318 14:26:53.772150 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:26:53 crc kubenswrapper[4857]: I0318 14:26:53.905846 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-92hzs"] Mar 18 14:26:54 crc kubenswrapper[4857]: I0318 14:26:54.075774 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:26:54 crc kubenswrapper[4857]: I0318 14:26:54.954376 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tpllm" event={"ID":"5181712d-25da-484b-9eb5-3fc9230bab14","Type":"ContainerStarted","Data":"28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7"} Mar 18 14:26:54 crc kubenswrapper[4857]: I0318 14:26:54.979840 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4sc5j" 
event={"ID":"fd4c05d5-43c8-4aad-9052-a519d7c6d182","Type":"ContainerStarted","Data":"12907e0236b8db836d4b44514e4494d4cb6867835367a755b97d1eccbe8e64f7"} Mar 18 14:26:54 crc kubenswrapper[4857]: I0318 14:26:54.985673 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:26:54 crc kubenswrapper[4857]: I0318 14:26:54.989655 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-92hzs" event={"ID":"9b4268c3-7d11-484c-8718-736b4fd44de6","Type":"ContainerStarted","Data":"2f9af8323de1260ab912580326da3f9714b77bf81cd299acf81b5ca997ab555b"} Mar 18 14:26:55 crc kubenswrapper[4857]: I0318 14:26:55.007250 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tpllm" podStartSLOduration=6.570111601 podStartE2EDuration="53.007225356s" podCreationTimestamp="2026-03-18 14:26:02 +0000 UTC" firstStartedPulling="2026-03-18 14:26:06.088884527 +0000 UTC m=+1550.218012984" lastFinishedPulling="2026-03-18 14:26:52.525998282 +0000 UTC m=+1596.655126739" observedRunningTime="2026-03-18 14:26:54.99346998 +0000 UTC m=+1599.122598437" watchObservedRunningTime="2026-03-18 14:26:55.007225356 +0000 UTC m=+1599.136353813" Mar 18 14:26:55 crc kubenswrapper[4857]: I0318 14:26:55.008224 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerStarted","Data":"6d2cf65e87d6ce173ea1860231b7077c759189d6c85f8f2181d8b7a364781e43"} Mar 18 14:26:55 crc kubenswrapper[4857]: I0318 14:26:55.018180 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerStarted","Data":"b5fe113423c9d1def3607cc18f5911206ef6a47e0a4799da03adff264933f291"} Mar 18 14:26:55 crc kubenswrapper[4857]: I0318 14:26:55.080509 4857 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/heat-db-sync-4sc5j" podStartSLOduration=5.169031302 podStartE2EDuration="54.080477029s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="2026-03-18 14:26:03.616498884 +0000 UTC m=+1547.745627341" lastFinishedPulling="2026-03-18 14:26:52.527944611 +0000 UTC m=+1596.657073068" observedRunningTime="2026-03-18 14:26:55.015350781 +0000 UTC m=+1599.144479248" watchObservedRunningTime="2026-03-18 14:26:55.080477029 +0000 UTC m=+1599.209605496" Mar 18 14:26:55 crc kubenswrapper[4857]: I0318 14:26:55.181346 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.038667 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cxdpg" event={"ID":"03c5e747-f831-4a2d-a73f-a26848b5c2a6","Type":"ContainerStarted","Data":"a371c313f22f4bd6912388cf881565ebdec5cd45e8d6f0d8aa0a5352d293572d"} Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.072133 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cxdpg" podStartSLOduration=7.404969532 podStartE2EDuration="54.072109103s" podCreationTimestamp="2026-03-18 14:26:02 +0000 UTC" firstStartedPulling="2026-03-18 14:26:06.088517167 +0000 UTC m=+1550.217645624" lastFinishedPulling="2026-03-18 14:26:52.755656728 +0000 UTC m=+1596.884785195" observedRunningTime="2026-03-18 14:26:56.05646852 +0000 UTC m=+1600.185596987" watchObservedRunningTime="2026-03-18 14:26:56.072109103 +0000 UTC m=+1600.201237560" Mar 18 14:26:56 crc kubenswrapper[4857]: W0318 14:26:56.740057 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeba55967_1b57_413b_a257_4e7894f3b270.slice/crio-e58e58f374b87d126da3704bf66d45cf3045b2f35070855f0599dd04b4368c18 WatchSource:0}: Error finding container 
e58e58f374b87d126da3704bf66d45cf3045b2f35070855f0599dd04b4368c18: Status 404 returned error can't find the container with id e58e58f374b87d126da3704bf66d45cf3045b2f35070855f0599dd04b4368c18 Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.990539 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"] Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.993348 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.996974 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 18 14:26:56 crc kubenswrapper[4857]: I0318 14:26:56.997262 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.031556 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"] Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.059069 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.059129 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.084473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm62z\" (UniqueName: 
\"kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.084563 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.084584 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.084666 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.084704 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.087257 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.087302 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.124311 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerStarted","Data":"3d72e292b8f9b49a5cd6d7cdf0cb870aa7b99cf5ede73b8c3087b2fecff088ed"} Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.126167 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" event={"ID":"eba55967-1b57-413b-a257-4e7894f3b270","Type":"ContainerStarted","Data":"e58e58f374b87d126da3704bf66d45cf3045b2f35070855f0599dd04b4368c18"} Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189234 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189306 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc 
kubenswrapper[4857]: I0318 14:26:57.189426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm62z\" (UniqueName: \"kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189483 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189585 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189726 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:57 crc kubenswrapper[4857]: I0318 14:26:57.189860 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.023898 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.026552 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.069291 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.070147 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.086263 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.094546 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.115672 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm62z\" (UniqueName: \"kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z\") pod \"neutron-7c7548b49f-k8mxj\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:58 crc kubenswrapper[4857]: I0318 14:26:58.281551 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:26:59 crc kubenswrapper[4857]: I0318 14:26:59.263537 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerStarted","Data":"bb2ab01878d4c92536a05e4d4e4a0e5dd770a5abc36c462aa7656ddd28f9558b"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.179182 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"] Mar 18 14:27:00 crc kubenswrapper[4857]: W0318 14:27:00.187789 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4891b36_5848_4530_9506_fcc9ee28f279.slice/crio-7e2a8d2edabd8dc5677686bda46cc97c3fb60f5ac1e7909ed1c4895ce8ea350d WatchSource:0}: Error finding container 7e2a8d2edabd8dc5677686bda46cc97c3fb60f5ac1e7909ed1c4895ce8ea350d: Status 404 returned error can't find the container with id 7e2a8d2edabd8dc5677686bda46cc97c3fb60f5ac1e7909ed1c4895ce8ea350d Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.260922 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-92hzs" 
event={"ID":"9b4268c3-7d11-484c-8718-736b4fd44de6","Type":"ContainerStarted","Data":"f50dc8cb888eab1560efbc5460bc54cc88218bf7266de0c42e2c0a80fc60017c"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.269910 4857 generic.go:334] "Generic (PLEG): container finished" podID="eba55967-1b57-413b-a257-4e7894f3b270" containerID="23e6da3aa0ca612354d1c7e6bdac23fffe187eb96ea7631624fe79b49e2afa0e" exitCode=0 Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.271044 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" event={"ID":"eba55967-1b57-413b-a257-4e7894f3b270","Type":"ContainerDied","Data":"23e6da3aa0ca612354d1c7e6bdac23fffe187eb96ea7631624fe79b49e2afa0e"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.281319 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerStarted","Data":"8e0188370a74f1293b2acaf67ab6ee5b6ac3f308cf28a620efb702c7c15d44d9"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.300329 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-92hzs" podStartSLOduration=31.300281788 podStartE2EDuration="31.300281788s" podCreationTimestamp="2026-03-18 14:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:00.281156257 +0000 UTC m=+1604.410284714" watchObservedRunningTime="2026-03-18 14:27:00.300281788 +0000 UTC m=+1604.429410245" Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.300466 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerStarted","Data":"a386f81110d76903657fc5acab50ac31f955b88c2f0d544459a4f748338fe6b4"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.306246 4857 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerStarted","Data":"a49e274f85f26e83022028c2708ba9020d8e37163cbca7a1af94a7a5026e4e76"} Mar 18 14:27:00 crc kubenswrapper[4857]: I0318 14:27:00.327487 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerStarted","Data":"7e2a8d2edabd8dc5677686bda46cc97c3fb60f5ac1e7909ed1c4895ce8ea350d"} Mar 18 14:27:01 crc kubenswrapper[4857]: I0318 14:27:01.740908 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerStarted","Data":"fc3b904ffe95113c24fdc6f7fb09c322198c53d1d6a1040d9fdf739939f6a099"} Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:02.864653 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.071542 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.122806 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.122862 4857 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.123161 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" podUID="189dc2a2-def0-41c0-9a6d-044db219385c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.139950 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded" start-of-body= Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.140034 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.163470 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.235125 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" podStartSLOduration=16.235097433 podStartE2EDuration="16.235097433s" podCreationTimestamp="2026-03-18 14:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-18 14:27:05.230063817 +0000 UTC m=+1609.359192274" watchObservedRunningTime="2026-03-18 14:27:05.235097433 +0000 UTC m=+1609.364225880" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.253310 4857 generic.go:334] "Generic (PLEG): container finished" podID="5181712d-25da-484b-9eb5-3fc9230bab14" containerID="28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7" exitCode=0 Mar 18 14:27:05 crc kubenswrapper[4857]: E0318 14:27:05.422518 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.469s" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.422624 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.422641 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" event={"ID":"eba55967-1b57-413b-a257-4e7894f3b270","Type":"ContainerStarted","Data":"522d353ce1ff264438d2ad53b03f0dfc30406e8e6372e294f06957e82e9138f9"} Mar 18 14:27:05 crc kubenswrapper[4857]: I0318 14:27:05.454104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tpllm" event={"ID":"5181712d-25da-484b-9eb5-3fc9230bab14","Type":"ContainerDied","Data":"28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7"} Mar 18 14:27:05 crc kubenswrapper[4857]: E0318 14:27:05.644340 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5181712d_25da_484b_9eb5_3fc9230bab14.slice/crio-conmon-28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5181712d_25da_484b_9eb5_3fc9230bab14.slice/crio-28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.503706 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:27:06 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:27:06 crc kubenswrapper[4857]: > Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.537083 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerStarted","Data":"648e2a54151565db1f574654def8c14a4b38c17832f649dfe28132ef526619ab"} Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.538645 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.540276 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerStarted","Data":"b3fd04628ee0ea8357f8a3e4f567c32992da7103a255703b648fb7b0780cd02f"} Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.544839 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerStarted","Data":"fd3f3e4c72979e1311976bba0362715519392668a8948f3eeb9614f335b3f82c"} Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.548501 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" 
event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerStarted","Data":"431d75572bf41dee46bbb2c87bec9ef742c99daa57fb6ec41e1c2a63cbf78c63"} Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.550061 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.593239 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c7548b49f-k8mxj" podStartSLOduration=10.593202751 podStartE2EDuration="10.593202751s" podCreationTimestamp="2026-03-18 14:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:06.57405996 +0000 UTC m=+1610.703188417" watchObservedRunningTime="2026-03-18 14:27:06.593202751 +0000 UTC m=+1610.722331208" Mar 18 14:27:06 crc kubenswrapper[4857]: I0318 14:27:06.618310 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=52.61826928 podStartE2EDuration="52.61826928s" podCreationTimestamp="2026-03-18 14:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:06.592486103 +0000 UTC m=+1610.721614550" watchObservedRunningTime="2026-03-18 14:27:06.61826928 +0000 UTC m=+1610.747397767" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.093573 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.093874 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.730055 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-6787dc4b5d-t6ns5" podStartSLOduration=18.730010281 podStartE2EDuration="18.730010281s" podCreationTimestamp="2026-03-18 14:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:06.631971464 +0000 UTC m=+1610.761099921" watchObservedRunningTime="2026-03-18 14:27:07.730010281 +0000 UTC m=+1611.859138738" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.838047 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.845615 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.845706 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.845719 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.845733 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.861135 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=51.861105424 podStartE2EDuration="51.861105424s" podCreationTimestamp="2026-03-18 14:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:07.155294793 +0000 UTC m=+1611.284423260" watchObservedRunningTime="2026-03-18 14:27:07.861105424 
+0000 UTC m=+1611.990233881" Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.878799 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:27:07 crc kubenswrapper[4857]: I0318 14:27:07.879158 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="dnsmasq-dns" containerID="cri-o://1421116f7ad96b51edcefeb06d8855061c861c989ef286696056d9cc1132130b" gracePeriod=10 Mar 18 14:27:08 crc kubenswrapper[4857]: I0318 14:27:08.123585 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: connect: connection refused" Mar 18 14:27:08 crc kubenswrapper[4857]: I0318 14:27:08.987641 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tpllm" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:08.998503 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tpllm" event={"ID":"5181712d-25da-484b-9eb5-3fc9230bab14","Type":"ContainerDied","Data":"3c469ed2e9e13991ecda54fd4877606220ca50c6f75db3382917ca48be784c32"} Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:08.998541 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c469ed2e9e13991ecda54fd4877606220ca50c6f75db3382917ca48be784c32" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.028084 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs\") pod \"5181712d-25da-484b-9eb5-3fc9230bab14\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.028158 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle\") pod \"5181712d-25da-484b-9eb5-3fc9230bab14\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.028186 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnwg2\" (UniqueName: \"kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2\") pod \"5181712d-25da-484b-9eb5-3fc9230bab14\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.028262 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts\") pod \"5181712d-25da-484b-9eb5-3fc9230bab14\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.028301 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data\") pod \"5181712d-25da-484b-9eb5-3fc9230bab14\" (UID: \"5181712d-25da-484b-9eb5-3fc9230bab14\") " Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.030093 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs" (OuterVolumeSpecName: "logs") pod "5181712d-25da-484b-9eb5-3fc9230bab14" (UID: "5181712d-25da-484b-9eb5-3fc9230bab14"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.047325 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerID="1421116f7ad96b51edcefeb06d8855061c861c989ef286696056d9cc1132130b" exitCode=0 Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.047544 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerDied","Data":"1421116f7ad96b51edcefeb06d8855061c861c989ef286696056d9cc1132130b"} Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.049345 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.054471 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.088247 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts" (OuterVolumeSpecName: "scripts") pod "5181712d-25da-484b-9eb5-3fc9230bab14" (UID: "5181712d-25da-484b-9eb5-3fc9230bab14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.093394 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2" (OuterVolumeSpecName: "kube-api-access-tnwg2") pod "5181712d-25da-484b-9eb5-3fc9230bab14" (UID: "5181712d-25da-484b-9eb5-3fc9230bab14"). InnerVolumeSpecName "kube-api-access-tnwg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.144407 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5181712d-25da-484b-9eb5-3fc9230bab14-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.144639 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnwg2\" (UniqueName: \"kubernetes.io/projected/5181712d-25da-484b-9eb5-3fc9230bab14-kube-api-access-tnwg2\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.144652 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.784991 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data" (OuterVolumeSpecName: "config-data") pod "5181712d-25da-484b-9eb5-3fc9230bab14" (UID: "5181712d-25da-484b-9eb5-3fc9230bab14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.845314 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5181712d-25da-484b-9eb5-3fc9230bab14" (UID: "5181712d-25da-484b-9eb5-3fc9230bab14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.866651 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.866684 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5181712d-25da-484b-9eb5-3fc9230bab14-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:09 crc kubenswrapper[4857]: I0318 14:27:09.943419 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073317 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073406 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073574 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073622 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xkpw9\" (UniqueName: \"kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073713 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.073791 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:10 crc kubenswrapper[4857]: I0318 14:27:10.091423 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9" (OuterVolumeSpecName: "kube-api-access-xkpw9") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "kube-api-access-xkpw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.056364 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkpw9\" (UniqueName: \"kubernetes.io/projected/fe3a063b-7a8d-46ea-9729-c78323df9c16-kube-api-access-xkpw9\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.102687 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.125054 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.137043 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.143572 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tpllm" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.144108 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" event={"ID":"fe3a063b-7a8d-46ea-9729-c78323df9c16","Type":"ContainerDied","Data":"30195da7d6feb87f612e6298427c95444882c4af59b96675a5671b99be836388"} Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.144191 4857 scope.go:117] "RemoveContainer" containerID="1421116f7ad96b51edcefeb06d8855061c861c989ef286696056d9cc1132130b" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.144395 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pgqhf" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.149030 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.159262 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.159291 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.159302 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.193011 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.193816 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.259791 4857 scope.go:117] "RemoveContainer" containerID="5fddfd2866749dd8c00fa61c4ec475bebd9819307f45afc49eae65b654bcc07d" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.262990 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config" (OuterVolumeSpecName: "config") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.271350 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") pod \"fe3a063b-7a8d-46ea-9729-c78323df9c16\" (UID: \"fe3a063b-7a8d-46ea-9729-c78323df9c16\") " Mar 18 14:27:11 crc kubenswrapper[4857]: W0318 14:27:11.273149 4857 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fe3a063b-7a8d-46ea-9729-c78323df9c16/volumes/kubernetes.io~configmap/config Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.273181 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config" (OuterVolumeSpecName: "config") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.335083 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe3a063b-7a8d-46ea-9729-c78323df9c16" (UID: "fe3a063b-7a8d-46ea-9729-c78323df9c16"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.375508 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:11 crc kubenswrapper[4857]: I0318 14:27:11.375555 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe3a063b-7a8d-46ea-9729-c78323df9c16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:12 crc kubenswrapper[4857]: I0318 14:27:12.494615 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:27:12 crc kubenswrapper[4857]: I0318 14:27:12.494641 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:27:12 crc kubenswrapper[4857]: I0318 14:27:12.562656 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:27:12 crc kubenswrapper[4857]: I0318 14:27:12.571162 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pgqhf"] Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.460432 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" probeResult="failure" output=< Mar 18 14:27:14 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:27:14 crc kubenswrapper[4857]: > Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.504665 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" path="/var/lib/kubelet/pods/fe3a063b-7a8d-46ea-9729-c78323df9c16/volumes" Mar 18 14:27:14 crc kubenswrapper[4857]: E0318 14:27:14.506325 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping 
took too long" expected="1s" actual="1.336s" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.506468 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmg7v" event={"ID":"6791c442-3e89-4211-b980-e00afa59d6c1","Type":"ContainerStarted","Data":"ac303c9ce4edd411fc80758bae6e07e3f1f9d86bb886768228e72a451c481388"} Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.513309 4857 generic.go:334] "Generic (PLEG): container finished" podID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" containerID="a371c313f22f4bd6912388cf881565ebdec5cd45e8d6f0d8aa0a5352d293572d" exitCode=0 Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.513393 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cxdpg" event={"ID":"03c5e747-f831-4a2d-a73f-a26848b5c2a6","Type":"ContainerDied","Data":"a371c313f22f4bd6912388cf881565ebdec5cd45e8d6f0d8aa0a5352d293572d"} Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.566159 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nmg7v" podStartSLOduration=12.867418249 podStartE2EDuration="1m13.566114123s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="2026-03-18 14:26:05.877829128 +0000 UTC m=+1550.006957585" lastFinishedPulling="2026-03-18 14:27:06.576525002 +0000 UTC m=+1610.705653459" observedRunningTime="2026-03-18 14:27:14.53055037 +0000 UTC m=+1618.659678827" watchObservedRunningTime="2026-03-18 14:27:14.566114123 +0000 UTC m=+1618.695242590" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.866240 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:27:14 crc kubenswrapper[4857]: E0318 14:27:14.866932 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="dnsmasq-dns" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.866957 4857 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="dnsmasq-dns" Mar 18 14:27:14 crc kubenswrapper[4857]: E0318 14:27:14.866992 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" containerName="placement-db-sync" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.867001 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" containerName="placement-db-sync" Mar 18 14:27:14 crc kubenswrapper[4857]: E0318 14:27:14.867026 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="init" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.867034 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="init" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.867343 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" containerName="placement-db-sync" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.867393 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3a063b-7a8d-46ea-9729-c78323df9c16" containerName="dnsmasq-dns" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.869140 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.878809 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-skh4r" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.879163 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.879301 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.879400 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.897190 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.911020 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.969559 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.969690 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.969883 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77nbf\" (UniqueName: \"kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.969912 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.969994 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.970051 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:14 crc kubenswrapper[4857]: I0318 14:27:14.970133 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072270 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072392 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072532 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77nbf\" (UniqueName: \"kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072566 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072646 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072703 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.072786 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.075023 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:15 crc kubenswrapper[4857]: I0318 14:27:15.100651 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.167662 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.178951 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle\") pod \"placement-9554cfcb4-bkg8z\" (UID: 
\"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.260503 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77nbf\" (UniqueName: \"kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.322009 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.322051 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.322063 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.322073 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.349313 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.365696 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.367628 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.466323 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs\") pod \"placement-9554cfcb4-bkg8z\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:16 crc kubenswrapper[4857]: I0318 14:27:16.709772 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:21 crc kubenswrapper[4857]: I0318 14:27:21.704019 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" podUID="189dc2a2-def0-41c0-9a6d-044db219385c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 14:27:21 crc kubenswrapper[4857]: I0318 14:27:21.816721 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.286028 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.286400 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.301104 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Liveness probe status=failure 
output="Get \"https://10.217.0.57:8081/live\": write tcp 10.217.0.2:49222->10.217.0.57:8081: write: broken pipe" start-of-body= Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.301155 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/live\": write tcp 10.217.0.2:49222->10.217.0.57:8081: write: broken pipe" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.331620 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.369332 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.369397 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.371592 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.371633 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" 
podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.377591 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8083/live\": context deadline exceeded" start-of-body= Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.377695 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/live\": context deadline exceeded" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.470481 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:24 crc kubenswrapper[4857]: I0318 14:27:24.475955 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.987242 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.987905 4857 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": context deadline exceeded" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988058 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988082 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988118 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988136 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988169 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988184 4857 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988239 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": context deadline exceeded" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:26.988260 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": context deadline exceeded" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.292893 4857 trace.go:236] Trace[1212260426]: "Calculate volume metrics of config-data for pod openstack/keystone-bootstrap-92hzs" (18-Mar-2026 14:27:18.065) (total time: 9227ms): Mar 18 14:27:27 crc kubenswrapper[4857]: Trace[1212260426]: [9.227551417s] [9.227551417s] END Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.293616 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.903377538s: [/var/lib/containers/storage/overlay/3a53f7c4995ced5a998a2daacb255a48b9bc735f3b0d54362b891014f08a5689/diff /var/log/pods/openshift-monitoring_prometheus-operator-db54df47d-zv2t7_26e4c4bc-7edb-45a7-8856-3a9e0146fcea/prometheus-operator/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.297425 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.906220059s: [/var/lib/containers/storage/overlay/c4e0eac2f5d2470658fdb03972ff10ac7d85e7556c862d910b18907cf45cf0c6/diff 
/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-cert-regeneration-controller/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.298007 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.906798784s: [/var/lib/containers/storage/overlay/ee7fd70970476ef71bb3548fe74e9824c093c67893908dbdeebf3106f1a5c3b9/diff /var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gjnmh_d50187d9-f94c-4f95-87f4-1065bb1d9eed/ovnkube-controller/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.305365 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.90627447s: [/var/lib/containers/storage/overlay/949019a0e5c40a95d290851c7790904b4bcab07eb1c6f56213dfcb3c37ce8098/diff ]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.311057 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.905675945s: [/var/lib/containers/storage/overlay/31624d525758c11fbc32372526fd122af961a1dcce41ecf46e6e0a64542a254e/diff /var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-86c8cb9b45-kxpht_e5ba6b5a-524d-488a-9435-5fea2c394e6a/manager/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.311534 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.906068225s: [/var/lib/containers/storage/overlay/ac66043a5e316900bd182fc9b4454e33638b3ef5df20158c6d25b14ccefd8397/diff ]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.316494 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.910667351s: 
[/var/lib/containers/storage/overlay/1e6c73db12b8ef26f0cbc6403556d224e03382b7884e23d5a6df7a98380e9f4f/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-cert-syncer/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.321104 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.895176641s: [/var/lib/containers/storage/overlay/d00d8701fd6b135836a0d7dd365915fae4c3f3b6789de3c70b9683ee8c34b713/diff /var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-w5jpj_206851e1-412e-4888-9635-f8eca5aa579e/gateway/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.322340 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 14:27:27 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:27:27 crc kubenswrapper[4857]: > Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.322422 4857 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.892617737s: [/var/lib/containers/storage/overlay/fd125510a8cefb9e6351a3b7460b3f92caa1e35df8b6af37bc6290d407bc6ec1/diff /var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-bl8th_9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e/gateway/0.log]; will not log again for this container unless duration exceeds 2s Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.330182 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.330381 4857 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 14:27:27 crc kubenswrapper[4857]: E0318 14:27:27.367492 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.651s" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.370164 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.370215 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.374461 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podUID="2cbcf5ed-41b1-4596-8e5d-05212018ba3b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.425489 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.533897 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:27:27 crc kubenswrapper[4857]: I0318 14:27:27.703695 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 14:27:28 crc kubenswrapper[4857]: I0318 14:27:28.164057 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 18 14:27:28 crc kubenswrapper[4857]: I0318 14:27:28.164181 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:27:28 crc kubenswrapper[4857]: I0318 14:27:28.166843 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 18 14:27:30 crc kubenswrapper[4857]: I0318 14:27:30.002067 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7c7548b49f-k8mxj" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:30 crc kubenswrapper[4857]: I0318 14:27:30.029948 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7c7548b49f-k8mxj" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:30 crc kubenswrapper[4857]: I0318 14:27:30.034052 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7c7548b49f-k8mxj" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:27:30 crc kubenswrapper[4857]: I0318 14:27:30.089618 4857 generic.go:334] "Generic (PLEG): container finished" podID="9b4268c3-7d11-484c-8718-736b4fd44de6" containerID="f50dc8cb888eab1560efbc5460bc54cc88218bf7266de0c42e2c0a80fc60017c" exitCode=0 Mar 18 14:27:30 crc kubenswrapper[4857]: I0318 14:27:30.091183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-92hzs" 
event={"ID":"9b4268c3-7d11-484c-8718-736b4fd44de6","Type":"ContainerDied","Data":"f50dc8cb888eab1560efbc5460bc54cc88218bf7266de0c42e2c0a80fc60017c"} Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.463145 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.466472 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": dial tcp 10.217.0.55:3101: i/o timeout" start-of-body= Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.466540 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": dial tcp 10.217.0.55:3101: i/o timeout" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.470466 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.470517 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get 
\"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.470581 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podUID="2fc1a575-873e-43b1-9707-bc6247ec8bbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.489183 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z72sl" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" probeResult="failure" output=< Mar 18 14:27:31 crc kubenswrapper[4857]: timeout: health rpc did not complete within 1s Mar 18 14:27:31 crc kubenswrapper[4857]: > Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.510407 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podUID="2fc1a575-873e-43b1-9707-bc6247ec8bbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.513589 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.513660 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" 
podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.514349 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": dial tcp 10.217.0.121:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.606169 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" containerID="cri-o://6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" gracePeriod=2 Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.614050 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:27:31 crc kubenswrapper[4857]: I0318 14:27:31.628193 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 14:27:32 crc kubenswrapper[4857]: E0318 14:27:32.616310 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2 is running failed: container process not found" containerID="6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:27:32 crc kubenswrapper[4857]: 
E0318 14:27:32.620184 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2 is running failed: container process not found" containerID="6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:27:32 crc kubenswrapper[4857]: E0318 14:27:32.620689 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2 is running failed: container process not found" containerID="6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:27:32 crc kubenswrapper[4857]: E0318 14:27:32.620725 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b98g7" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.631103 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.640531 4857 generic.go:334] "Generic (PLEG): container finished" podID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerID="6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" exitCode=0 Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.640632 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerDied","Data":"6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2"} Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.658194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-92hzs" event={"ID":"9b4268c3-7d11-484c-8718-736b4fd44de6","Type":"ContainerDied","Data":"2f9af8323de1260ab912580326da3f9714b77bf81cd299acf81b5ca997ab555b"} Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.658257 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9af8323de1260ab912580326da3f9714b77bf81cd299acf81b5ca997ab555b" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.658433 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-92hzs" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.669668 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.693610 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cxdpg" event={"ID":"03c5e747-f831-4a2d-a73f-a26848b5c2a6","Type":"ContainerDied","Data":"562740759aca043df72154e806af044e0aa5dbc01ef0e793bd2fb80b758be62c"} Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.693698 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="562740759aca043df72154e806af044e0aa5dbc01ef0e793bd2fb80b758be62c" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738586 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data\") pod \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738721 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738776 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738818 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: 
\"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738854 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.738905 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.739013 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbz5\" (UniqueName: \"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5\") pod \"9b4268c3-7d11-484c-8718-736b4fd44de6\" (UID: \"9b4268c3-7d11-484c-8718-736b4fd44de6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.739060 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle\") pod \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.739120 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpl2t\" (UniqueName: \"kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t\") pod \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\" (UID: \"03c5e747-f831-4a2d-a73f-a26848b5c2a6\") " Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.746120 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5" (OuterVolumeSpecName: "kube-api-access-xfbz5") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "kube-api-access-xfbz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.746581 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.748794 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts" (OuterVolumeSpecName: "scripts") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.754943 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.755013 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "03c5e747-f831-4a2d-a73f-a26848b5c2a6" (UID: "03c5e747-f831-4a2d-a73f-a26848b5c2a6"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.781050 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t" (OuterVolumeSpecName: "kube-api-access-lpl2t") pod "03c5e747-f831-4a2d-a73f-a26848b5c2a6" (UID: "03c5e747-f831-4a2d-a73f-a26848b5c2a6"). InnerVolumeSpecName "kube-api-access-lpl2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.806188 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data" (OuterVolumeSpecName: "config-data") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.823033 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03c5e747-f831-4a2d-a73f-a26848b5c2a6" (UID: "03c5e747-f831-4a2d-a73f-a26848b5c2a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841608 4857 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841644 4857 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841654 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841662 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841672 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbz5\" (UniqueName: \"kubernetes.io/projected/9b4268c3-7d11-484c-8718-736b4fd44de6-kube-api-access-xfbz5\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841682 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841691 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpl2t\" (UniqueName: \"kubernetes.io/projected/03c5e747-f831-4a2d-a73f-a26848b5c2a6-kube-api-access-lpl2t\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.841699 4857 
reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/03c5e747-f831-4a2d-a73f-a26848b5c2a6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.874073 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b4268c3-7d11-484c-8718-736b4fd44de6" (UID: "9b4268c3-7d11-484c-8718-736b4fd44de6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:32 crc kubenswrapper[4857]: I0318 14:27:32.944830 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4268c3-7d11-484c-8718-736b4fd44de6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:33 crc kubenswrapper[4857]: I0318 14:27:33.051868 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:33 crc kubenswrapper[4857]: I0318 14:27:33.052038 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:27:33 crc kubenswrapper[4857]: I0318 14:27:33.052901 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 18 14:27:33 crc kubenswrapper[4857]: I0318 14:27:33.913982 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-cxdpg" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.057165 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6bddf5f585-25djb"] Mar 18 14:27:34 crc kubenswrapper[4857]: E0318 14:27:34.058266 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" containerName="barbican-db-sync" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.058362 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" containerName="barbican-db-sync" Mar 18 14:27:34 crc kubenswrapper[4857]: E0318 14:27:34.058455 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4268c3-7d11-484c-8718-736b4fd44de6" containerName="keystone-bootstrap" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.058527 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4268c3-7d11-484c-8718-736b4fd44de6" containerName="keystone-bootstrap" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.059188 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4268c3-7d11-484c-8718-736b4fd44de6" containerName="keystone-bootstrap" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.059292 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" containerName="barbican-db-sync" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.060661 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.064088 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.066232 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.066412 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.066636 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.068059 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.068131 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4kgzh" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.079373 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bddf5f585-25djb"] Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784493 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-scripts\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784574 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-config-data\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " 
pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784655 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-internal-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784708 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-fernet-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784738 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-combined-ca-bundle\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784951 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-credential-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.784978 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-public-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: 
\"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.785075 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztck\" (UniqueName: \"kubernetes.io/projected/d94f0649-a747-48de-bb74-4db5047cf5d5-kube-api-access-xztck\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887058 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-credential-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887352 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-public-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887429 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xztck\" (UniqueName: \"kubernetes.io/projected/d94f0649-a747-48de-bb74-4db5047cf5d5-kube-api-access-xztck\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-scripts\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " 
pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887524 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-config-data\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887617 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-internal-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887708 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-fernet-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.887743 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-combined-ca-bundle\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.920007 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-public-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 
14:27:34.932326 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-scripts\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.936613 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-config-data\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.940665 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-combined-ca-bundle\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.941454 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-credential-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.952908 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-internal-tls-certs\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.961198 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xztck\" (UniqueName: 
\"kubernetes.io/projected/d94f0649-a747-48de-bb74-4db5047cf5d5-kube-api-access-xztck\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:34 crc kubenswrapper[4857]: I0318 14:27:34.980351 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d94f0649-a747-48de-bb74-4db5047cf5d5-fernet-keys\") pod \"keystone-6bddf5f585-25djb\" (UID: \"d94f0649-a747-48de-bb74-4db5047cf5d5\") " pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:34.990972 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.007566 4857 generic.go:334] "Generic (PLEG): container finished" podID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" containerID="12907e0236b8db836d4b44514e4494d4cb6867835367a755b97d1eccbe8e64f7" exitCode=0 Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.007635 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4sc5j" event={"ID":"fd4c05d5-43c8-4aad-9052-a519d7c6d182","Type":"ContainerDied","Data":"12907e0236b8db836d4b44514e4494d4cb6867835367a755b97d1eccbe8e64f7"} Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.014583 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.018989 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.030519 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.030858 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.031020 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bhdh5" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.100286 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.115472 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.120703 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.128660 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.135666 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.135788 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.135853 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.135907 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.135949 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwj2\" (UniqueName: \"kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.173738 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.238696 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwj2\" (UniqueName: \"kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239089 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239212 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239367 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dp5\" (UniqueName: 
\"kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239449 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239560 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239659 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239767 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.239899 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.240015 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.242088 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.244435 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.249668 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.250522 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.257002 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.272433 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.306683 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwj2\" (UniqueName: \"kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2\") pod \"barbican-worker-f5657b887-87l4t\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " pod="openstack/barbican-worker-f5657b887-87l4t" 
Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.315691 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342318 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342367 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342426 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz7z\" (UniqueName: \"kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342490 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4dp5\" (UniqueName: \"kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342546 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342590 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342623 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342664 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342723 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342777 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.342826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.346326 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.361693 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.364876 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b8d49d4dc-q2jgf"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.367218 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.391514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.400716 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.401428 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.415619 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4dp5\" (UniqueName: \"kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5\") pod \"barbican-keystone-listener-89f9cddcb-2jcgs\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.445403 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.445921 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.445975 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.446023 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsz7z\" (UniqueName: \"kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: 
\"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.446150 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.446248 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.447310 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.447966 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.448592 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 
14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.448776 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.454677 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.459503 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b8d49d4dc-q2jgf"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.477458 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.478687 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7cc94898c8-q6kp6"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.481449 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.544113 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cc94898c8-q6kp6"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.548881 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qn46\" (UniqueName: \"kubernetes.io/projected/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-kube-api-access-2qn46\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.549049 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-combined-ca-bundle\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.554998 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-logs\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.555125 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data-custom\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.555210 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.599657 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsz7z\" (UniqueName: \"kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z\") pod \"dnsmasq-dns-85ff748b95-4ql88\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") " pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.661482 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.678188 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.691961 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.692544 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-logs\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.692599 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data-custom\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693031 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693134 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qn46\" (UniqueName: \"kubernetes.io/projected/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-kube-api-access-2qn46\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693182 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693237 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-combined-ca-bundle\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693301 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693345 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data-custom\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693380 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-combined-ca-bundle\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc 
kubenswrapper[4857]: I0318 14:27:35.693428 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsfz\" (UniqueName: \"kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693456 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-logs\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693478 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693547 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.693961 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " 
pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.694016 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg49\" (UniqueName: \"kubernetes.io/projected/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-kube-api-access-ghg49\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.696892 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-logs\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.718180 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.722056 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-config-data-custom\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.722772 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-combined-ca-bundle\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " 
pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.732719 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qn46\" (UniqueName: \"kubernetes.io/projected/cec7fb8b-0248-4c9b-ba87-9d0840a07ce7-kube-api-access-2qn46\") pod \"barbican-worker-5b8d49d4dc-q2jgf\" (UID: \"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7\") " pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.733566 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.734402 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.744939 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798561 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-combined-ca-bundle\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798654 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798695 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data-custom\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798768 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsfz\" (UniqueName: \"kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798792 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-logs\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798816 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798959 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.798995 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.799040 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg49\" (UniqueName: \"kubernetes.io/projected/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-kube-api-access-ghg49\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.799207 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.804259 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-logs\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.804938 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.805565 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.806807 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.808949 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-combined-ca-bundle\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.811473 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-config-data-custom\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.813368 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.817559 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.853925 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg49\" (UniqueName: \"kubernetes.io/projected/3cc875e0-0e5b-446b-8836-5c8b3ceb9736-kube-api-access-ghg49\") pod \"barbican-keystone-listener-7cc94898c8-q6kp6\" (UID: \"3cc875e0-0e5b-446b-8836-5c8b3ceb9736\") " pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:35 crc kubenswrapper[4857]: I0318 14:27:35.864830 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsfz\" (UniqueName: \"kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz\") pod \"barbican-api-786dc49864-sjmlm\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:36 crc kubenswrapper[4857]: I0318 14:27:36.037075 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:36 crc kubenswrapper[4857]: I0318 14:27:36.135804 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" Mar 18 14:27:39 crc kubenswrapper[4857]: I0318 14:27:39.949552 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-758d4bf778-sxwcw"] Mar 18 14:27:39 crc kubenswrapper[4857]: I0318 14:27:39.957314 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:39 crc kubenswrapper[4857]: I0318 14:27:39.961773 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 18 14:27:39 crc kubenswrapper[4857]: I0318 14:27:39.967518 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.024249 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-758d4bf778-sxwcw"] Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.157584 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8kr\" (UniqueName: \"kubernetes.io/projected/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-kube-api-access-fs8kr\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.157697 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-combined-ca-bundle\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.157728 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data-custom\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.157940 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-public-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.158038 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-logs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.158150 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-internal-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.158265 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.260618 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8kr\" (UniqueName: \"kubernetes.io/projected/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-kube-api-access-fs8kr\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.261105 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-combined-ca-bundle\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.261523 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data-custom\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.261722 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-public-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.261888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-logs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.262194 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-internal-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.262330 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.262252 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-logs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.268672 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-combined-ca-bundle\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.269858 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-public-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.278327 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-internal-tls-certs\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.279743 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data-custom\") pod 
\"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.281063 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-config-data\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.281344 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8kr\" (UniqueName: \"kubernetes.io/projected/40d7e3cc-c623-483b-bbd0-f88a2246cf7b-kube-api-access-fs8kr\") pod \"barbican-api-758d4bf778-sxwcw\" (UID: \"40d7e3cc-c623-483b-bbd0-f88a2246cf7b\") " pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:40 crc kubenswrapper[4857]: I0318 14:27:40.288572 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.187863 4857 generic.go:334] "Generic (PLEG): container finished" podID="6791c442-3e89-4211-b980-e00afa59d6c1" containerID="ac303c9ce4edd411fc80758bae6e07e3f1f9d86bb886768228e72a451c481388" exitCode=0 Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.187954 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmg7v" event={"ID":"6791c442-3e89-4211-b980-e00afa59d6c1","Type":"ContainerDied","Data":"ac303c9ce4edd411fc80758bae6e07e3f1f9d86bb886768228e72a451c481388"} Mar 18 14:27:41 crc kubenswrapper[4857]: E0318 14:27:41.346954 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Mar 18 14:27:41 crc kubenswrapper[4857]: E0318 14:27:41.347582 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4zgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(34da3be3-c034-4c63-866c-57097fb5c847): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.498025 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4sc5j" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.506067 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687442 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content\") pod \"54bc8846-fa5e-4a90-af94-4b44e6bde172\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687563 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities\") pod \"54bc8846-fa5e-4a90-af94-4b44e6bde172\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687595 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72bpz\" (UniqueName: \"kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz\") pod 
\"fd4c05d5-43c8-4aad-9052-a519d7c6d182\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687653 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data\") pod \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687721 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle\") pod \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\" (UID: \"fd4c05d5-43c8-4aad-9052-a519d7c6d182\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.687898 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxzlc\" (UniqueName: \"kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc\") pod \"54bc8846-fa5e-4a90-af94-4b44e6bde172\" (UID: \"54bc8846-fa5e-4a90-af94-4b44e6bde172\") " Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.688638 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities" (OuterVolumeSpecName: "utilities") pod "54bc8846-fa5e-4a90-af94-4b44e6bde172" (UID: "54bc8846-fa5e-4a90-af94-4b44e6bde172"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.713098 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz" (OuterVolumeSpecName: "kube-api-access-72bpz") pod "fd4c05d5-43c8-4aad-9052-a519d7c6d182" (UID: "fd4c05d5-43c8-4aad-9052-a519d7c6d182"). InnerVolumeSpecName "kube-api-access-72bpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.713171 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc" (OuterVolumeSpecName: "kube-api-access-gxzlc") pod "54bc8846-fa5e-4a90-af94-4b44e6bde172" (UID: "54bc8846-fa5e-4a90-af94-4b44e6bde172"). InnerVolumeSpecName "kube-api-access-gxzlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.743120 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4c05d5-43c8-4aad-9052-a519d7c6d182" (UID: "fd4c05d5-43c8-4aad-9052-a519d7c6d182"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.795731 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.795982 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72bpz\" (UniqueName: \"kubernetes.io/projected/fd4c05d5-43c8-4aad-9052-a519d7c6d182-kube-api-access-72bpz\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.796002 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.796015 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxzlc\" (UniqueName: 
\"kubernetes.io/projected/54bc8846-fa5e-4a90-af94-4b44e6bde172-kube-api-access-gxzlc\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:41 crc kubenswrapper[4857]: I0318 14:27:41.958689 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data" (OuterVolumeSpecName: "config-data") pod "fd4c05d5-43c8-4aad-9052-a519d7c6d182" (UID: "fd4c05d5-43c8-4aad-9052-a519d7c6d182"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.007315 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4c05d5-43c8-4aad-9052-a519d7c6d182-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.025557 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54bc8846-fa5e-4a90-af94-4b44e6bde172" (UID: "54bc8846-fa5e-4a90-af94-4b44e6bde172"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.114552 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bc8846-fa5e-4a90-af94-4b44e6bde172-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.241066 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4sc5j" event={"ID":"fd4c05d5-43c8-4aad-9052-a519d7c6d182","Type":"ContainerDied","Data":"b2b3806152b06f5b37955f870076a24993763fd71f66464820164243ecd5d064"} Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.241128 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b3806152b06f5b37955f870076a24993763fd71f66464820164243ecd5d064" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.241994 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4sc5j" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.272612 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b98g7" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.277220 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b98g7" event={"ID":"54bc8846-fa5e-4a90-af94-4b44e6bde172","Type":"ContainerDied","Data":"c421186901b636497a5a327568858607b19a69fad96d585294380c3d408188ab"} Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.277438 4857 scope.go:117] "RemoveContainer" containerID="6d98144d566ba9002ab7acad0d9d7f8c4604477bd24cb5a6ee66845bb9c634d2" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.379496 4857 scope.go:117] "RemoveContainer" containerID="e01c807e426a367deb3a1e94d26f9f1c3255210cd7514ba39266eefe3cb854b1" Mar 18 14:27:42 crc kubenswrapper[4857]: I0318 14:27:42.957178 4857 scope.go:117] "RemoveContainer" containerID="9efc2c2ecd9674c1d03eb8ce6f52f0554f7edb68a20263e56e420fcbf28a83c7" Mar 18 14:27:42 crc kubenswrapper[4857]: W0318 14:27:42.993556 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c0d12e_fe57_42ab_bb2c_3f7c8a4af061.slice/crio-b58173bb4208b46badedfe288366a99f599dca3fb4537a696febb83d69b8f080 WatchSource:0}: Error finding container b58173bb4208b46badedfe288366a99f599dca3fb4537a696febb83d69b8f080: Status 404 returned error can't find the container with id b58173bb4208b46badedfe288366a99f599dca3fb4537a696febb83d69b8f080 Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.002215 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.043636 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.055347 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b98g7"] Mar 18 
14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.276953 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" path="/var/lib/kubelet/pods/54bc8846-fa5e-4a90-af94-4b44e6bde172/volumes" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.278710 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.289111 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerStarted","Data":"b58173bb4208b46badedfe288366a99f599dca3fb4537a696febb83d69b8f080"} Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.291170 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerStarted","Data":"68dbe674c6b0b1e8c1572f8e0ff6f512c7ad0c761b736e13afb9f077b55ea666"} Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.456001 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.621733 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.621865 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.621920 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.622154 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.622141 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.622307 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcvls\" (UniqueName: \"kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.622367 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle\") pod \"6791c442-3e89-4211-b980-e00afa59d6c1\" (UID: \"6791c442-3e89-4211-b980-e00afa59d6c1\") " Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.622934 4857 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6791c442-3e89-4211-b980-e00afa59d6c1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.627714 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.629927 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts" (OuterVolumeSpecName: "scripts") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.657145 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls" (OuterVolumeSpecName: "kube-api-access-xcvls") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "kube-api-access-xcvls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.719094 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bddf5f585-25djb"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.719504 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.725303 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.725329 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcvls\" (UniqueName: \"kubernetes.io/projected/6791c442-3e89-4211-b980-e00afa59d6c1-kube-api-access-xcvls\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.725342 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.725350 4857 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:43 crc kubenswrapper[4857]: W0318 14:27:43.725857 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc875e0_0e5b_446b_8836_5c8b3ceb9736.slice/crio-562f9cb359303d1897b01333d016af08c7330658597b44ef1ef112670f37ad20 WatchSource:0}: Error finding container 562f9cb359303d1897b01333d016af08c7330658597b44ef1ef112670f37ad20: Status 404 returned error can't find the container with id 562f9cb359303d1897b01333d016af08c7330658597b44ef1ef112670f37ad20 Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.732217 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7cc94898c8-q6kp6"] Mar 18 14:27:43 crc kubenswrapper[4857]: W0318 14:27:43.738357 4857 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd94f0649_a747_48de_bb74_4db5047cf5d5.slice/crio-3f74bd1f111bda1f2ed880773fb2ef89a7d5c1f560bc57f8d94dcd43ab5b486d WatchSource:0}: Error finding container 3f74bd1f111bda1f2ed880773fb2ef89a7d5c1f560bc57f8d94dcd43ab5b486d: Status 404 returned error can't find the container with id 3f74bd1f111bda1f2ed880773fb2ef89a7d5c1f560bc57f8d94dcd43ab5b486d Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.741579 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.753010 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.838900 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data" (OuterVolumeSpecName: "config-data") pod "6791c442-3e89-4211-b980-e00afa59d6c1" (UID: "6791c442-3e89-4211-b980-e00afa59d6c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.892524 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"] Mar 18 14:27:43 crc kubenswrapper[4857]: I0318 14:27:43.931425 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6791c442-3e89-4211-b980-e00afa59d6c1-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.071598 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-758d4bf778-sxwcw"] Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.099145 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b8d49d4dc-q2jgf"] Mar 18 14:27:44 crc kubenswrapper[4857]: W0318 14:27:44.142837 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40d7e3cc_c623_483b_bbd0_f88a2246cf7b.slice/crio-97c2b19115e336f04206d02101163a9fec4e59e2f35ee7ab6541a0e9be1ee70e WatchSource:0}: Error finding container 97c2b19115e336f04206d02101163a9fec4e59e2f35ee7ab6541a0e9be1ee70e: Status 404 returned error can't find the container with id 97c2b19115e336f04206d02101163a9fec4e59e2f35ee7ab6541a0e9be1ee70e Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.315083 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmg7v" event={"ID":"6791c442-3e89-4211-b980-e00afa59d6c1","Type":"ContainerDied","Data":"519b3f5265e17f38cb7bf63094aede0ebe9f5acecdad2756ac5810459ddc842e"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.315129 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="519b3f5265e17f38cb7bf63094aede0ebe9f5acecdad2756ac5810459ddc842e" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.315214 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmg7v" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.319417 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" event={"ID":"29b88825-f24d-4344-a47f-0f04a9726730","Type":"ContainerStarted","Data":"1b20fc6d9505062be98a6be5461388e779c75936bf7adf56fb8d8e3c0e2473f8"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.322714 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-758d4bf778-sxwcw" event={"ID":"40d7e3cc-c623-483b-bbd0-f88a2246cf7b","Type":"ContainerStarted","Data":"97c2b19115e336f04206d02101163a9fec4e59e2f35ee7ab6541a0e9be1ee70e"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.328237 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerStarted","Data":"4efab212dbec6d1792b83b874ffdc474599f3dff9ce5f1e1b1ae912ae6993a65"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.336830 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" event={"ID":"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7","Type":"ContainerStarted","Data":"07ce43d5da9b6ebdae5645500481a6c19fb48384036dfbdc2674773947909296"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.349565 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bddf5f585-25djb" event={"ID":"d94f0649-a747-48de-bb74-4db5047cf5d5","Type":"ContainerStarted","Data":"3f74bd1f111bda1f2ed880773fb2ef89a7d5c1f560bc57f8d94dcd43ab5b486d"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.349737 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.358717 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" 
event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerStarted","Data":"746d04b39a2cc14a741a17263e09b96d6d7e89da84dbf8a6e419cf80f049f449"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.362183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerStarted","Data":"929df1992cf0fc69308df086ad805f057fc3e19380f1aa24b71078b3d89de0ac"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.362226 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerStarted","Data":"252c4e58550cf5b3f7e8fd0b7d9b61587ca003ba59c100fc33b795c0b12d82b4"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.362357 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.362405 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.373029 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" event={"ID":"3cc875e0-0e5b-446b-8836-5c8b3ceb9736","Type":"ContainerStarted","Data":"562f9cb359303d1897b01333d016af08c7330658597b44ef1ef112670f37ad20"} Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.393313 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6bddf5f585-25djb" podStartSLOduration=10.393270999 podStartE2EDuration="10.393270999s" podCreationTimestamp="2026-03-18 14:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:44.373961373 +0000 UTC m=+1648.503089820" watchObservedRunningTime="2026-03-18 14:27:44.393270999 +0000 
UTC m=+1648.522399456" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.501155 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9554cfcb4-bkg8z" podStartSLOduration=30.501127533000002 podStartE2EDuration="30.501127533s" podCreationTimestamp="2026-03-18 14:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:44.408188394 +0000 UTC m=+1648.537316851" watchObservedRunningTime="2026-03-18 14:27:44.501127533 +0000 UTC m=+1648.630255980" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.881626 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:27:44 crc kubenswrapper[4857]: E0318 14:27:44.882237 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="extract-utilities" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882259 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="extract-utilities" Mar 18 14:27:44 crc kubenswrapper[4857]: E0318 14:27:44.882287 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="extract-content" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882294 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="extract-content" Mar 18 14:27:44 crc kubenswrapper[4857]: E0318 14:27:44.882308 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" containerName="cinder-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882315 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" containerName="cinder-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: E0318 
14:27:44.882335 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" containerName="heat-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882341 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" containerName="heat-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: E0318 14:27:44.882363 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882369 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882594 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" containerName="heat-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882623 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" containerName="cinder-db-sync" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.882636 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bc8846-fa5e-4a90-af94-4b44e6bde172" containerName="registry-server" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.888536 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.891640 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.896294 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pf4wf" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.896893 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.901052 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 18 14:27:44 crc kubenswrapper[4857]: I0318 14:27:44.922223 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.065227 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.065745 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.065907 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.065947 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.066075 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.066290 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfw25\" (UniqueName: \"kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.152836 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169453 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169625 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169783 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfw25\" (UniqueName: \"kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169814 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169901 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.169924 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.170211 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " 
pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.203050 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.203461 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.204815 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.209548 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfw25\" (UniqueName: \"kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.211218 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data\") pod \"cinder-scheduler-0\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.273393 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.348151 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.360011 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.389498 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396147 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396291 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396414 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396463 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " 
pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396589 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396672 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7b6m\" (UniqueName: \"kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.396904 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.432313 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.438058 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.456318 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.470847 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bddf5f585-25djb" event={"ID":"d94f0649-a747-48de-bb74-4db5047cf5d5","Type":"ContainerStarted","Data":"66a6febc0bd144046b7523f9662d0403300277ac226ca988b0bada78c96e648c"} Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.474253 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.497678 4857 generic.go:334] "Generic (PLEG): container finished" podID="29b88825-f24d-4344-a47f-0f04a9726730" containerID="dbc7db0f09d6b9a1cf1cef39d8623e010d192c899f9608dda698e4d018b4b2d6" exitCode=0 Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.497812 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" event={"ID":"29b88825-f24d-4344-a47f-0f04a9726730","Type":"ContainerDied","Data":"dbc7db0f09d6b9a1cf1cef39d8623e010d192c899f9608dda698e4d018b4b2d6"} Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.499993 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcxl\" (UniqueName: \"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.500105 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data\") pod \"cinder-api-0\" (UID: 
\"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.504933 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505011 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505134 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505162 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505200 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 
14:27:45.505238 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505349 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505384 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7b6m\" (UniqueName: \"kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505448 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505491 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505650 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.505772 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.506162 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.517129 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.518053 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.524329 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 
14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.533514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.542199 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7b6m\" (UniqueName: \"kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m\") pod \"cinder-api-0\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.557039 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerStarted","Data":"13e97e0395d574918d8a9ac9bf9dca174573f3dbb33ea39591085a7c8b1ab78a"} Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.557391 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerStarted","Data":"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30"} Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.557486 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.557569 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608080 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: 
\"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608154 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608265 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcxl\" (UniqueName: \"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608434 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608513 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.608552 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.609486 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.613555 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.614811 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.617336 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.618186 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.653515 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcxl\" (UniqueName: \"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl\") pod \"dnsmasq-dns-5c9776ccc5-jcn5v\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.789440 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:27:45 crc kubenswrapper[4857]: I0318 14:27:45.790426 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:27:46 crc kubenswrapper[4857]: I0318 14:27:46.572280 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerID="13e97e0395d574918d8a9ac9bf9dca174573f3dbb33ea39591085a7c8b1ab78a" exitCode=1 Mar 18 14:27:46 crc kubenswrapper[4857]: I0318 14:27:46.573377 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"13e97e0395d574918d8a9ac9bf9dca174573f3dbb33ea39591085a7c8b1ab78a"} Mar 18 14:27:46 crc kubenswrapper[4857]: I0318 14:27:46.574613 4857 scope.go:117] "RemoveContainer" containerID="13e97e0395d574918d8a9ac9bf9dca174573f3dbb33ea39591085a7c8b1ab78a" Mar 18 14:27:46 crc kubenswrapper[4857]: I0318 14:27:46.582510 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-758d4bf778-sxwcw" event={"ID":"40d7e3cc-c623-483b-bbd0-f88a2246cf7b","Type":"ContainerStarted","Data":"a793cd272a4d5b8ed0f6d1a4b35f916888423e00d3a0fa5927ad51a2b06344c6"} Mar 18 14:27:47 crc kubenswrapper[4857]: I0318 14:27:47.421672 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:27:47 crc kubenswrapper[4857]: I0318 14:27:47.621562 4857 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:27:47 crc kubenswrapper[4857]: I0318 14:27:47.737460 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:27:48 crc kubenswrapper[4857]: I0318 14:27:48.040969 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:48 crc kubenswrapper[4857]: I0318 14:27:48.708554 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerStarted","Data":"c5833a82245be25182e8f7aac3bf49b953bd4ccb487ecc307492959b006a2a3e"} Mar 18 14:27:48 crc kubenswrapper[4857]: I0318 14:27:48.735329 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerStarted","Data":"030ef56ee88b5b5b8dd78c09becbc26133c55d6b3d32f222f26394506a498783"} Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.436066 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:27:49 crc kubenswrapper[4857]: W0318 14:27:49.441133 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2128d65a_6594_4f94_89be_6a552d89bf98.slice/crio-d78f3931be8a48e57fa7f1a7269d05f8b2dbc75522b59a098f70f6fc97accb25 WatchSource:0}: Error finding container d78f3931be8a48e57fa7f1a7269d05f8b2dbc75522b59a098f70f6fc97accb25: Status 404 returned error can't find the container with id d78f3931be8a48e57fa7f1a7269d05f8b2dbc75522b59a098f70f6fc97accb25 Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.763231 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-758d4bf778-sxwcw" 
event={"ID":"40d7e3cc-c623-483b-bbd0-f88a2246cf7b","Type":"ContainerStarted","Data":"6a039c0e5357242c1cdf11b1dd3efe330a3aa7dbe2d87e8a55fb53df8b4e999a"} Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.764803 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.764930 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.770801 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" event={"ID":"2128d65a-6594-4f94-89be-6a552d89bf98","Type":"ContainerStarted","Data":"d78f3931be8a48e57fa7f1a7269d05f8b2dbc75522b59a098f70f6fc97accb25"} Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.778654 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" event={"ID":"29b88825-f24d-4344-a47f-0f04a9726730","Type":"ContainerStarted","Data":"f986cf71540bf01e28c5408375bf309b9d16bba40b3156f3602d6e66ae7c4cd5"} Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.778879 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="dnsmasq-dns" containerID="cri-o://f986cf71540bf01e28c5408375bf309b9d16bba40b3156f3602d6e66ae7c4cd5" gracePeriod=10 Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.779217 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.786104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerStarted","Data":"238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a"} Mar 18 
14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.786590 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.796801 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.797486 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-758d4bf778-sxwcw" podStartSLOduration=10.797471404 podStartE2EDuration="10.797471404s" podCreationTimestamp="2026-03-18 14:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:49.78538771 +0000 UTC m=+1653.914516167" watchObservedRunningTime="2026-03-18 14:27:49.797471404 +0000 UTC m=+1653.926599861" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.827647 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" podStartSLOduration=14.827624533 podStartE2EDuration="14.827624533s" podCreationTimestamp="2026-03-18 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:49.823320645 +0000 UTC m=+1653.952449102" watchObservedRunningTime="2026-03-18 14:27:49.827624533 +0000 UTC m=+1653.956752990" Mar 18 14:27:49 crc kubenswrapper[4857]: I0318 14:27:49.850001 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-786dc49864-sjmlm" podStartSLOduration=14.849970035 podStartE2EDuration="14.849970035s" podCreationTimestamp="2026-03-18 14:27:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:45.611801023 +0000 UTC m=+1649.740929480" watchObservedRunningTime="2026-03-18 14:27:49.849970035 +0000 UTC m=+1653.979098492"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.018668 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerStarted","Data":"838e756aaf3234086b02230c39627295d1241dbf473384e99f7ca2cc5570a614"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.035647 4857 generic.go:334] "Generic (PLEG): container finished" podID="2128d65a-6594-4f94-89be-6a552d89bf98" containerID="97a80ca89af75c970fec73c950de8e936c2041dfbb7b78f61bf66e3224352040" exitCode=0
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.035723 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" event={"ID":"2128d65a-6594-4f94-89be-6a552d89bf98","Type":"ContainerDied","Data":"97a80ca89af75c970fec73c950de8e936c2041dfbb7b78f61bf66e3224352040"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.046349 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.046629 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.079006 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6787dc4b5d-t6ns5"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.088305 4857 generic.go:334] "Generic (PLEG): container finished" podID="29b88825-f24d-4344-a47f-0f04a9726730" containerID="f986cf71540bf01e28c5408375bf309b9d16bba40b3156f3602d6e66ae7c4cd5" exitCode=0
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.113978 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" event={"ID":"29b88825-f24d-4344-a47f-0f04a9726730","Type":"ContainerDied","Data":"f986cf71540bf01e28c5408375bf309b9d16bba40b3156f3602d6e66ae7c4cd5"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.134884 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a" exitCode=1
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.135029 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.135106 4857 scope.go:117] "RemoveContainer" containerID="13e97e0395d574918d8a9ac9bf9dca174573f3dbb33ea39591085a7c8b1ab78a"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.137055 4857 scope.go:117] "RemoveContainer" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a"
Mar 18 14:27:51 crc kubenswrapper[4857]: E0318 14:27:51.137770 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-786dc49864-sjmlm_openstack(2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba)\"" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.138331 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.197698 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ql88"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.296967 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" podStartSLOduration=11.811883844 podStartE2EDuration="16.296941948s" podCreationTimestamp="2026-03-18 14:27:35 +0000 UTC" firstStartedPulling="2026-03-18 14:27:43.738996615 +0000 UTC m=+1647.868125072" lastFinishedPulling="2026-03-18 14:27:48.224054719 +0000 UTC m=+1652.353183176" observedRunningTime="2026-03-18 14:27:51.253121006 +0000 UTC m=+1655.382249463" watchObservedRunningTime="2026-03-18 14:27:51.296941948 +0000 UTC m=+1655.426070405"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311409 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311465 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsz7z\" (UniqueName: \"kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311485 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311718 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311852 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.311877 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.380206 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerStarted","Data":"bd71a21f461112152b66f0e73a082dba0abee0280f2ea72bb932abb24c552a2f"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.384773 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" event={"ID":"3cc875e0-0e5b-446b-8836-5c8b3ceb9736","Type":"ContainerStarted","Data":"ea8e5a4380b64efb60ecd241d12c903924a7a20fe5eba5bcd63a7a1ce8851986"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.384948 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" event={"ID":"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7","Type":"ContainerStarted","Data":"6514a5780cb5cc05dc7ce0c966aa921addf080b72de6e1eb3e818fe5278231d1"}
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.395552 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z" (OuterVolumeSpecName: "kube-api-access-rsz7z") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "kube-api-access-rsz7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.423994 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsz7z\" (UniqueName: \"kubernetes.io/projected/29b88825-f24d-4344-a47f-0f04a9726730-kube-api-access-rsz7z\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.459229 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"]
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.548099 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"]
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.548449 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c7548b49f-k8mxj" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-api" containerID="cri-o://fc3b904ffe95113c24fdc6f7fb09c322198c53d1d6a1040d9fdf739939f6a099" gracePeriod=30
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.551514 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c7548b49f-k8mxj" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" containerID="cri-o://648e2a54151565db1f574654def8c14a4b38c17832f649dfe28132ef526619ab" gracePeriod=30
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.625801 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bf8cc5fd5-pf2nl"]
Mar 18 14:27:51 crc kubenswrapper[4857]: E0318 14:27:51.626461 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="init"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.626485 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="init"
Mar 18 14:27:51 crc kubenswrapper[4857]: E0318 14:27:51.626523 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="dnsmasq-dns"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.626534 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="dnsmasq-dns"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.626818 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b88825-f24d-4344-a47f-0f04a9726730" containerName="dnsmasq-dns"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.628212 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.644915 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c7548b49f-k8mxj"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.653130 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf8cc5fd5-pf2nl"]
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752590 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-public-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752633 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-ovndb-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752671 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jwkb\" (UniqueName: \"kubernetes.io/projected/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-kube-api-access-4jwkb\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752707 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-combined-ca-bundle\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752777 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752949 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-httpd-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.752980 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-internal-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.794244 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config" (OuterVolumeSpecName: "config") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.815246 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.831950 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.835480 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.861364 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.862475 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") pod \"29b88825-f24d-4344-a47f-0f04a9726730\" (UID: \"29b88825-f24d-4344-a47f-0f04a9726730\") "
Mar 18 14:27:51 crc kubenswrapper[4857]: W0318 14:27:51.862635 4857 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/29b88825-f24d-4344-a47f-0f04a9726730/volumes/kubernetes.io~configmap/dns-svc
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.862650 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29b88825-f24d-4344-a47f-0f04a9726730" (UID: "29b88825-f24d-4344-a47f-0f04a9726730"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.862908 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-httpd-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.862959 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-internal-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863071 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-public-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863092 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-ovndb-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863122 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jwkb\" (UniqueName: \"kubernetes.io/projected/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-kube-api-access-4jwkb\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863159 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-combined-ca-bundle\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863204 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863380 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863396 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-config\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863406 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863416 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.863425 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29b88825-f24d-4344-a47f-0f04a9726730-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.868083 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-public-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.868858 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.872618 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-ovndb-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.878247 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-internal-tls-certs\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.886247 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-httpd-config\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.894891 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-combined-ca-bundle\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:51 crc kubenswrapper[4857]: I0318 14:27:51.895070 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jwkb\" (UniqueName: \"kubernetes.io/projected/de9f5a39-f6e4-496d-9a40-a8b8716eaa57-kube-api-access-4jwkb\") pod \"neutron-6bf8cc5fd5-pf2nl\" (UID: \"de9f5a39-f6e4-496d-9a40-a8b8716eaa57\") " pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.076494 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6bf8cc5fd5-pf2nl"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.264036 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7cc94898c8-q6kp6" event={"ID":"3cc875e0-0e5b-446b-8836-5c8b3ceb9736","Type":"ContainerStarted","Data":"28578f7eba27f8e3f13f0c2bbb517cbfef49f550ca8545c0ffbab9dcd3ec6e60"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.266794 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" event={"ID":"cec7fb8b-0248-4c9b-ba87-9d0840a07ce7","Type":"ContainerStarted","Data":"9876a64fce13b4501e5999d89a0f368727514e6e82cfd5c813f91de9b3feabfd"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.271183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerStarted","Data":"b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.271336 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener-log" containerID="cri-o://838e756aaf3234086b02230c39627295d1241dbf473384e99f7ca2cc5570a614" gracePeriod=30
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.271567 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener" containerID="cri-o://b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a" gracePeriod=30
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.289172 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ql88" event={"ID":"29b88825-f24d-4344-a47f-0f04a9726730","Type":"ContainerDied","Data":"1b20fc6d9505062be98a6be5461388e779c75936bf7adf56fb8d8e3c0e2473f8"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.289247 4857 scope.go:117] "RemoveContainer" containerID="f986cf71540bf01e28c5408375bf309b9d16bba40b3156f3602d6e66ae7c4cd5"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.289431 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ql88"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.296452 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b8d49d4dc-q2jgf" podStartSLOduration=12.120076718 podStartE2EDuration="17.296424729s" podCreationTimestamp="2026-03-18 14:27:35 +0000 UTC" firstStartedPulling="2026-03-18 14:27:44.205667568 +0000 UTC m=+1648.334796025" lastFinishedPulling="2026-03-18 14:27:49.382015579 +0000 UTC m=+1653.511144036" observedRunningTime="2026-03-18 14:27:52.292398328 +0000 UTC m=+1656.421526795" watchObservedRunningTime="2026-03-18 14:27:52.296424729 +0000 UTC m=+1656.425553186"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.306281 4857 generic.go:334] "Generic (PLEG): container finished" podID="d4891b36-5848-4530-9506-fcc9ee28f279" containerID="648e2a54151565db1f574654def8c14a4b38c17832f649dfe28132ef526619ab" exitCode=0
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.306362 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerDied","Data":"648e2a54151565db1f574654def8c14a4b38c17832f649dfe28132ef526619ab"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.323180 4857 scope.go:117] "RemoveContainer" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a"
Mar 18 14:27:52 crc kubenswrapper[4857]: E0318 14:27:52.323551 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-786dc49864-sjmlm_openstack(2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba)\"" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.324239 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.336115 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" podStartSLOduration=13.17975325 podStartE2EDuration="18.336080817s" podCreationTimestamp="2026-03-18 14:27:34 +0000 UTC" firstStartedPulling="2026-03-18 14:27:43.007467496 +0000 UTC m=+1647.136595953" lastFinishedPulling="2026-03-18 14:27:48.163795063 +0000 UTC m=+1652.292923520" observedRunningTime="2026-03-18 14:27:52.321208563 +0000 UTC m=+1656.450337020" watchObservedRunningTime="2026-03-18 14:27:52.336080817 +0000 UTC m=+1656.465209274"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.342555 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerStarted","Data":"12ba4cc9b33a1047d5091f62eabbf8f05f73ec2439592588b995457cf06505fa"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.376287 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.376806 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerStarted","Data":"d97d85beed3edd6694dfa3d925b0126520cfc8ce4924edd3982cf57eaa63aebd"}
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.379377 4857 scope.go:117] "RemoveContainer" containerID="dbc7db0f09d6b9a1cf1cef39d8623e010d192c899f9608dda698e4d018b4b2d6"
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.380953 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"]
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.406167 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"]
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.504399 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ql88"]
Mar 18 14:27:52 crc kubenswrapper[4857]: I0318 14:27:52.573958 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f5657b887-87l4t" podStartSLOduration=13.483344911 podStartE2EDuration="18.573729878s" podCreationTimestamp="2026-03-18 14:27:34 +0000 UTC" firstStartedPulling="2026-03-18 14:27:43.909196938 +0000 UTC m=+1648.038325395" lastFinishedPulling="2026-03-18 14:27:48.999581905 +0000 UTC m=+1653.128710362" observedRunningTime="2026-03-18 14:27:52.399289928 +0000 UTC m=+1656.528418385" watchObservedRunningTime="2026-03-18 14:27:52.573729878 +0000 UTC m=+1656.702858335"
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.191856 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29b88825-f24d-4344-a47f-0f04a9726730" path="/var/lib/kubelet/pods/29b88825-f24d-4344-a47f-0f04a9726730/volumes"
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.193906 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf8cc5fd5-pf2nl"]
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.549004 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerStarted","Data":"e9f5f32a7cff1e1d677cd5eec47c2d2667cba16c41b31530969d13a4ecaaced9"}
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.549231 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api-log" containerID="cri-o://12ba4cc9b33a1047d5091f62eabbf8f05f73ec2439592588b995457cf06505fa" gracePeriod=30
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.549620 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.550098 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" containerID="cri-o://e9f5f32a7cff1e1d677cd5eec47c2d2667cba16c41b31530969d13a4ecaaced9" gracePeriod=30
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.561041 4857 generic.go:334] "Generic (PLEG): container finished" podID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerID="838e756aaf3234086b02230c39627295d1241dbf473384e99f7ca2cc5570a614" exitCode=143
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.561152 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerDied","Data":"838e756aaf3234086b02230c39627295d1241dbf473384e99f7ca2cc5570a614"}
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.591312 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf8cc5fd5-pf2nl" event={"ID":"de9f5a39-f6e4-496d-9a40-a8b8716eaa57","Type":"ContainerStarted","Data":"8435d77ea0926ddfe1856d3a6b8fb7a46f952681eca828484f9a0bfea3819729"}
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.610347 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.610310213 podStartE2EDuration="8.610310213s" podCreationTimestamp="2026-03-18 14:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:53.587235592 +0000 UTC m=+1657.716364049" watchObservedRunningTime="2026-03-18 14:27:53.610310213 +0000 UTC m=+1657.739438680"
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.614490 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" event={"ID":"2128d65a-6594-4f94-89be-6a552d89bf98","Type":"ContainerStarted","Data":"36d50ea2228dadb5ab7fc4856e2ed844aba4a550acf02b4adfd6b9f2f0a5ecd6"}
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.616926 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v"
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.650838 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerStarted","Data":"fd3ab12ca52b718be2098439dc1856ebd2075fa0319ccb7c1c2afaff7648a91a"}
Mar 18 14:27:53 crc kubenswrapper[4857]: I0318 14:27:53.711457 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" podStartSLOduration=8.711433378 podStartE2EDuration="8.711433378s" podCreationTimestamp="2026-03-18 14:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:53.67018582 +0000 UTC m=+1657.799314287" watchObservedRunningTime="2026-03-18 14:27:53.711433378 +0000 UTC m=+1657.840561835"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.039311 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-786dc49864-sjmlm"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.040014 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.040824 4857 scope.go:117] "RemoveContainer" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.040953 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused"
Mar 18 14:27:54 crc kubenswrapper[4857]: E0318 14:27:54.041144 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=barbican-api pod=barbican-api-786dc49864-sjmlm_openstack(2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba)\"" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.777134 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerStarted","Data":"4eb321dc40f56e9e35b00e90b33a686fa10c1b8c2937c7f84f71f7ccd33ede1f"}
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.803987 4857 generic.go:334] "Generic (PLEG): container finished" podID="d4891b36-5848-4530-9506-fcc9ee28f279" containerID="fc3b904ffe95113c24fdc6f7fb09c322198c53d1d6a1040d9fdf739939f6a099" exitCode=0
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.804104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerDied","Data":"fc3b904ffe95113c24fdc6f7fb09c322198c53d1d6a1040d9fdf739939f6a099"}
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.835262 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.605892161 podStartE2EDuration="10.835240228s" podCreationTimestamp="2026-03-18 14:27:44 +0000 UTC" firstStartedPulling="2026-03-18 14:27:48.989807119 +0000 UTC m=+1653.118935576" lastFinishedPulling="2026-03-18 14:27:50.219155186 +0000 UTC m=+1654.348283643" observedRunningTime="2026-03-18 14:27:54.818435376 +0000 UTC m=+1658.947563833" watchObservedRunningTime="2026-03-18 14:27:54.835240228 +0000 UTC m=+1658.964368685"
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.841160 4857 generic.go:334] "Generic (PLEG): container finished" podID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerID="12ba4cc9b33a1047d5091f62eabbf8f05f73ec2439592588b995457cf06505fa" exitCode=143
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.841281 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerDied","Data":"12ba4cc9b33a1047d5091f62eabbf8f05f73ec2439592588b995457cf06505fa"}
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.852013 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf8cc5fd5-pf2nl" event={"ID":"de9f5a39-f6e4-496d-9a40-a8b8716eaa57","Type":"ContainerStarted","Data":"2c7e0b51ea6add2a6ab8892012bb7bcaa36befe457654aa79f9b2394ea0a6417"}
Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.852063 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf8cc5fd5-pf2nl"
event={"ID":"de9f5a39-f6e4-496d-9a40-a8b8716eaa57","Type":"ContainerStarted","Data":"f4e8870c03da2362948716683c6521b83f594c15c2f34271a0d5d0daa3714a35"} Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.852154 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f5657b887-87l4t" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker-log" containerID="cri-o://bd71a21f461112152b66f0e73a082dba0abee0280f2ea72bb932abb24c552a2f" gracePeriod=30 Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.852408 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f5657b887-87l4t" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker" containerID="cri-o://d97d85beed3edd6694dfa3d925b0126520cfc8ce4924edd3982cf57eaa63aebd" gracePeriod=30 Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.896955 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bf8cc5fd5-pf2nl" podStartSLOduration=3.8969136989999997 podStartE2EDuration="3.896913699s" podCreationTimestamp="2026-03-18 14:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:27:54.889536994 +0000 UTC m=+1659.018665441" watchObservedRunningTime="2026-03-18 14:27:54.896913699 +0000 UTC m=+1659.026042176" Mar 18 14:27:54 crc kubenswrapper[4857]: I0318 14:27:54.897274 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.007962 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.135734 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm62z\" (UniqueName: \"kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.135910 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.136014 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.136036 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.136083 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.136161 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.136198 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs\") pod \"d4891b36-5848-4530-9506-fcc9ee28f279\" (UID: \"d4891b36-5848-4530-9506-fcc9ee28f279\") " Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.162145 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.163614 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z" (OuterVolumeSpecName: "kube-api-access-rm62z") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "kube-api-access-rm62z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.263842 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config" (OuterVolumeSpecName: "config") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.267914 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.275869 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.281426 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm62z\" (UniqueName: \"kubernetes.io/projected/d4891b36-5848-4530-9506-fcc9ee28f279-kube-api-access-rm62z\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.281467 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.348967 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.377062 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.378473 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4891b36-5848-4530-9506-fcc9ee28f279" (UID: "d4891b36-5848-4530-9506-fcc9ee28f279"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.384830 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.384863 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.384876 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.384885 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-public-tls-certs\") on node \"crc\" DevicePath \"\"" 
Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.384894 4857 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4891b36-5848-4530-9506-fcc9ee28f279-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.890185 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7548b49f-k8mxj" event={"ID":"d4891b36-5848-4530-9506-fcc9ee28f279","Type":"ContainerDied","Data":"7e2a8d2edabd8dc5677686bda46cc97c3fb60f5ac1e7909ed1c4895ce8ea350d"} Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.890265 4857 scope.go:117] "RemoveContainer" containerID="648e2a54151565db1f574654def8c14a4b38c17832f649dfe28132ef526619ab" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.890468 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7548b49f-k8mxj" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.905112 4857 generic.go:334] "Generic (PLEG): container finished" podID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerID="d97d85beed3edd6694dfa3d925b0126520cfc8ce4924edd3982cf57eaa63aebd" exitCode=0 Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.905140 4857 generic.go:334] "Generic (PLEG): container finished" podID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerID="bd71a21f461112152b66f0e73a082dba0abee0280f2ea72bb932abb24c552a2f" exitCode=143 Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.906398 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerDied","Data":"d97d85beed3edd6694dfa3d925b0126520cfc8ce4924edd3982cf57eaa63aebd"} Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.906442 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" 
event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerDied","Data":"bd71a21f461112152b66f0e73a082dba0abee0280f2ea72bb932abb24c552a2f"} Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.907850 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bf8cc5fd5-pf2nl" Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.982022 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"] Mar 18 14:27:55 crc kubenswrapper[4857]: I0318 14:27:55.996884 4857 scope.go:117] "RemoveContainer" containerID="fc3b904ffe95113c24fdc6f7fb09c322198c53d1d6a1040d9fdf739939f6a099" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.038992 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.074783 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c7548b49f-k8mxj"] Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.371469 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.424812 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom\") pod \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.424901 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle\") pod \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.424936 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs\") pod \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.425007 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data\") pod \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.425236 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwwj2\" (UniqueName: \"kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2\") pod \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\" (UID: \"bb5b081a-d64c-4015-a17c-4ebf0f194f32\") " Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.427208 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs" (OuterVolumeSpecName: "logs") pod "bb5b081a-d64c-4015-a17c-4ebf0f194f32" (UID: "bb5b081a-d64c-4015-a17c-4ebf0f194f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.439884 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb5b081a-d64c-4015-a17c-4ebf0f194f32" (UID: "bb5b081a-d64c-4015-a17c-4ebf0f194f32"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.465060 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2" (OuterVolumeSpecName: "kube-api-access-rwwj2") pod "bb5b081a-d64c-4015-a17c-4ebf0f194f32" (UID: "bb5b081a-d64c-4015-a17c-4ebf0f194f32"). InnerVolumeSpecName "kube-api-access-rwwj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.530085 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwwj2\" (UniqueName: \"kubernetes.io/projected/bb5b081a-d64c-4015-a17c-4ebf0f194f32-kube-api-access-rwwj2\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.530122 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.530132 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb5b081a-d64c-4015-a17c-4ebf0f194f32-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.551976 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data" (OuterVolumeSpecName: "config-data") pod "bb5b081a-d64c-4015-a17c-4ebf0f194f32" (UID: "bb5b081a-d64c-4015-a17c-4ebf0f194f32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.588710 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb5b081a-d64c-4015-a17c-4ebf0f194f32" (UID: "bb5b081a-d64c-4015-a17c-4ebf0f194f32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.632641 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.632681 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb5b081a-d64c-4015-a17c-4ebf0f194f32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.968718 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f5657b887-87l4t" Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.976272 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5657b887-87l4t" event={"ID":"bb5b081a-d64c-4015-a17c-4ebf0f194f32","Type":"ContainerDied","Data":"4efab212dbec6d1792b83b874ffdc474599f3dff9ce5f1e1b1ae912ae6993a65"} Mar 18 14:27:56 crc kubenswrapper[4857]: I0318 14:27:56.976350 4857 scope.go:117] "RemoveContainer" containerID="d97d85beed3edd6694dfa3d925b0126520cfc8ce4924edd3982cf57eaa63aebd" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.021611 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"] Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.032144 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-f5657b887-87l4t"] Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.040402 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:27:57 crc 
kubenswrapper[4857]: I0318 14:27:57.040457 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.040687 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.041455 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.041510 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" gracePeriod=600 Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.042300 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.042336 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:27:57 crc 
kubenswrapper[4857]: I0318 14:27:57.042933 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api-log" containerStatusID={"Type":"cri-o","ID":"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30"} pod="openstack/barbican-api-786dc49864-sjmlm" containerMessage="Container barbican-api-log failed liveness probe, will be restarted" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.042958 4857 scope.go:117] "RemoveContainer" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.042977 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" containerID="cri-o://96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30" gracePeriod=30 Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.043447 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.184716 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" path="/var/lib/kubelet/pods/bb5b081a-d64c-4015-a17c-4ebf0f194f32/volumes" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.185551 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" path="/var/lib/kubelet/pods/d4891b36-5848-4530-9506-fcc9ee28f279/volumes" Mar 18 14:27:57 crc kubenswrapper[4857]: I0318 14:27:57.900783 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-758d4bf778-sxwcw" Mar 18 
14:27:58 crc kubenswrapper[4857]: I0318 14:27:58.003920 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:27:58 crc kubenswrapper[4857]: I0318 14:27:58.047977 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" exitCode=0 Mar 18 14:27:58 crc kubenswrapper[4857]: I0318 14:27:58.048074 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"} Mar 18 14:27:58 crc kubenswrapper[4857]: I0318 14:27:58.060095 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerID="96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30" exitCode=143 Mar 18 14:27:58 crc kubenswrapper[4857]: I0318 14:27:58.060150 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30"} Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183097 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564068-95p85"] Mar 18 14:28:00 crc kubenswrapper[4857]: E0318 14:28:00.183659 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183677 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" Mar 18 14:28:00 crc kubenswrapper[4857]: E0318 14:28:00.183693 4857 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183699 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker" Mar 18 14:28:00 crc kubenswrapper[4857]: E0318 14:28:00.183714 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker-log" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183721 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker-log" Mar 18 14:28:00 crc kubenswrapper[4857]: E0318 14:28:00.183740 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-api" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183749 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-api" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183970 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-api" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.183993 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker-log" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.184010 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb5b081a-d64c-4015-a17c-4ebf0f194f32" containerName="barbican-worker" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.184026 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4891b36-5848-4530-9506-fcc9ee28f279" containerName="neutron-httpd" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.189302 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.192703 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.192917 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.199343 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.214382 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564068-95p85"] Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.261066 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgng\" (UniqueName: \"kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng\") pod \"auto-csr-approver-29564068-95p85\" (UID: \"b145d97a-a264-4d03-9908-a3957a52ceb0\") " pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.363712 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgng\" (UniqueName: \"kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng\") pod \"auto-csr-approver-29564068-95p85\" (UID: \"b145d97a-a264-4d03-9908-a3957a52ceb0\") " pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.390719 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgng\" (UniqueName: \"kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng\") pod \"auto-csr-approver-29564068-95p85\" (UID: \"b145d97a-a264-4d03-9908-a3957a52ceb0\") " 
pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.513641 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.533105 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.586899 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.792197 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.933439 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:28:00 crc kubenswrapper[4857]: I0318 14:28:00.933709 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="dnsmasq-dns" containerID="cri-o://522d353ce1ff264438d2ad53b03f0dfc30406e8e6372e294f06957e82e9138f9" gracePeriod=10 Mar 18 14:28:01 crc kubenswrapper[4857]: I0318 14:28:01.040153 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:28:01 crc kubenswrapper[4857]: I0318 14:28:01.114455 4857 generic.go:334] "Generic (PLEG): container finished" podID="eba55967-1b57-413b-a257-4e7894f3b270" containerID="522d353ce1ff264438d2ad53b03f0dfc30406e8e6372e294f06957e82e9138f9" exitCode=0 Mar 18 14:28:01 crc kubenswrapper[4857]: I0318 14:28:01.114522 4857 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" event={"ID":"eba55967-1b57-413b-a257-4e7894f3b270","Type":"ContainerDied","Data":"522d353ce1ff264438d2ad53b03f0dfc30406e8e6372e294f06957e82e9138f9"} Mar 18 14:28:01 crc kubenswrapper[4857]: I0318 14:28:01.114743 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="cinder-scheduler" containerID="cri-o://fd3ab12ca52b718be2098439dc1856ebd2075fa0319ccb7c1c2afaff7648a91a" gracePeriod=30 Mar 18 14:28:01 crc kubenswrapper[4857]: I0318 14:28:01.115157 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="probe" containerID="cri-o://4eb321dc40f56e9e35b00e90b33a686fa10c1b8c2937c7f84f71f7ccd33ede1f" gracePeriod=30 Mar 18 14:28:02 crc kubenswrapper[4857]: I0318 14:28:02.173491 4857 generic.go:334] "Generic (PLEG): container finished" podID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerID="fd3ab12ca52b718be2098439dc1856ebd2075fa0319ccb7c1c2afaff7648a91a" exitCode=0 Mar 18 14:28:02 crc kubenswrapper[4857]: I0318 14:28:02.173684 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerDied","Data":"fd3ab12ca52b718be2098439dc1856ebd2075fa0319ccb7c1c2afaff7648a91a"} Mar 18 14:28:03 crc kubenswrapper[4857]: I0318 14:28:03.206187 4857 generic.go:334] "Generic (PLEG): container finished" podID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerID="4eb321dc40f56e9e35b00e90b33a686fa10c1b8c2937c7f84f71f7ccd33ede1f" exitCode=0 Mar 18 14:28:03 crc kubenswrapper[4857]: I0318 14:28:03.206245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerDied","Data":"4eb321dc40f56e9e35b00e90b33a686fa10c1b8c2937c7f84f71f7ccd33ede1f"} Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.660398 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.672315 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.698304 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.781539 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.781919 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.781967 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfqp\" (UniqueName: \"kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: E0318 14:28:04.873700 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.884651 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfqp\" (UniqueName: \"kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.884788 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.884995 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.885483 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 
14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.886213 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:04 crc kubenswrapper[4857]: I0318 14:28:04.905769 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfqp\" (UniqueName: \"kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp\") pod \"redhat-marketplace-qf8qd\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.010292 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.064712 4857 scope.go:117] "RemoveContainer" containerID="bd71a21f461112152b66f0e73a082dba0abee0280f2ea72bb932abb24c552a2f" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.262051 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:28:05 crc kubenswrapper[4857]: E0318 14:28:05.263033 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.300511 4857 scope.go:117] "RemoveContainer" 
containerID="91a90a144a14eacf348bc7099bee1e1014620034eda456b5565275cbe4bb9d37" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.412015 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.428923 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.616925 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.617289 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.617571 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.617620 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.617727 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.617773 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb\") pod \"eba55967-1b57-413b-a257-4e7894f3b270\" (UID: \"eba55967-1b57-413b-a257-4e7894f3b270\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.631371 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl" (OuterVolumeSpecName: "kube-api-access-2ttsl") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). InnerVolumeSpecName "kube-api-access-2ttsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.743798 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ttsl\" (UniqueName: \"kubernetes.io/projected/eba55967-1b57-413b-a257-4e7894f3b270-kube-api-access-2ttsl\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.745620 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.805324 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.847524 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.858861 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.879052 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.904133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.916299 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config" (OuterVolumeSpecName: "config") pod "eba55967-1b57-413b-a257-4e7894f3b270" (UID: "eba55967-1b57-413b-a257-4e7894f3b270"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.950649 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.950866 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.950900 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.950923 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951003 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfw25\" (UniqueName: \"kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951057 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts\") pod \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\" (UID: \"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914\") " Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951340 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951841 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951860 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951871 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951882 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eba55967-1b57-413b-a257-4e7894f3b270-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.951891 4857 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:05 crc 
kubenswrapper[4857]: E0318 14:28:05.963279 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="34da3be3-c034-4c63-866c-57097fb5c847" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.964058 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25" (OuterVolumeSpecName: "kube-api-access-jfw25") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "kube-api-access-jfw25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.969371 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:05 crc kubenswrapper[4857]: I0318 14:28:05.980953 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts" (OuterVolumeSpecName: "scripts") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.044266 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.209:9311/healthcheck\": dial tcp 10.217.0.209:9311: connect: connection refused" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.064107 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.064155 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfw25\" (UniqueName: \"kubernetes.io/projected/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-kube-api-access-jfw25\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.064172 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.086911 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.117033 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564068-95p85"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.168925 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.234159 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data" (OuterVolumeSpecName: "config-data") pod "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" (UID: "1db30cff-baf2-4fc4-b1bd-45c9d6d1c914"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.282708 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564068-95p85" event={"ID":"b145d97a-a264-4d03-9908-a3957a52ceb0","Type":"ContainerStarted","Data":"41d2b6226e70a968f955d102c231e9a11530298ff7f939321207dcb24732c6a0"} Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.305267 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.307443 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.323197 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerStarted","Data":"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995"} 
Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.353148 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" event={"ID":"eba55967-1b57-413b-a257-4e7894f3b270","Type":"ContainerDied","Data":"e58e58f374b87d126da3704bf66d45cf3045b2f35070855f0599dd04b4368c18"} Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.353216 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7k6cz" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.353221 4857 scope.go:117] "RemoveContainer" containerID="522d353ce1ff264438d2ad53b03f0dfc30406e8e6372e294f06957e82e9138f9" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.371285 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerStarted","Data":"04dae7fa0f4734da09c5f78dbaa4a07150e01d6097d4583160394fd1df2e8009"} Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.371351 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="ceilometer-notification-agent" containerID="cri-o://a49e274f85f26e83022028c2708ba9020d8e37163cbca7a1af94a7a5026e4e76" gracePeriod=30 Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.371444 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.371457 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="proxy-httpd" containerID="cri-o://04dae7fa0f4734da09c5f78dbaa4a07150e01d6097d4583160394fd1df2e8009" gracePeriod=30 Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.384799 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"1db30cff-baf2-4fc4-b1bd-45c9d6d1c914","Type":"ContainerDied","Data":"030ef56ee88b5b5b8dd78c09becbc26133c55d6b3d32f222f26394506a498783"} Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.384916 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.450411 4857 scope.go:117] "RemoveContainer" containerID="23e6da3aa0ca612354d1c7e6bdac23fffe187eb96ea7631624fe79b49e2afa0e" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.486153 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.510624 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.527378 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:06 crc kubenswrapper[4857]: E0318 14:28:06.528064 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="init" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528086 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="init" Mar 18 14:28:06 crc kubenswrapper[4857]: E0318 14:28:06.528116 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="dnsmasq-dns" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528122 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="dnsmasq-dns" Mar 18 14:28:06 crc kubenswrapper[4857]: E0318 14:28:06.528136 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="probe" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528142 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="probe" Mar 18 14:28:06 crc kubenswrapper[4857]: E0318 14:28:06.528164 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="cinder-scheduler" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528170 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="cinder-scheduler" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528445 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba55967-1b57-413b-a257-4e7894f3b270" containerName="dnsmasq-dns" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528461 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="probe" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.528488 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" containerName="cinder-scheduler" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.530066 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.538014 4857 scope.go:117] "RemoveContainer" containerID="4eb321dc40f56e9e35b00e90b33a686fa10c1b8c2937c7f84f71f7ccd33ede1f" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.552167 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.562073 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.589802 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7k6cz"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.601020 4857 scope.go:117] "RemoveContainer" containerID="fd3ab12ca52b718be2098439dc1856ebd2075fa0319ccb7c1c2afaff7648a91a" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.605674 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.625620 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.625974 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.626166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.626226 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.626289 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk5lf\" (UniqueName: \"kubernetes.io/projected/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-kube-api-access-vk5lf\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.626347 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.729949 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730117 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730179 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730239 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk5lf\" (UniqueName: \"kubernetes.io/projected/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-kube-api-access-vk5lf\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730293 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730374 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.730454 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.738250 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.738386 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.739185 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.741135 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.754858 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk5lf\" (UniqueName: \"kubernetes.io/projected/f2dbb697-87e8-4c7f-bf29-a918e84fd78e-kube-api-access-vk5lf\") pod \"cinder-scheduler-0\" (UID: \"f2dbb697-87e8-4c7f-bf29-a918e84fd78e\") " pod="openstack/cinder-scheduler-0" Mar 18 14:28:06 crc kubenswrapper[4857]: I0318 14:28:06.901102 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.191556 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db30cff-baf2-4fc4-b1bd-45c9d6d1c914" path="/var/lib/kubelet/pods/1db30cff-baf2-4fc4-b1bd-45c9d6d1c914/volumes" Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.196541 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba55967-1b57-413b-a257-4e7894f3b270" path="/var/lib/kubelet/pods/eba55967-1b57-413b-a257-4e7894f3b270/volumes" Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.408943 4857 generic.go:334] "Generic (PLEG): container finished" podID="34da3be3-c034-4c63-866c-57097fb5c847" containerID="04dae7fa0f4734da09c5f78dbaa4a07150e01d6097d4583160394fd1df2e8009" exitCode=0 Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.409462 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerDied","Data":"04dae7fa0f4734da09c5f78dbaa4a07150e01d6097d4583160394fd1df2e8009"} Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.417860 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerID="9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c" exitCode=1 Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.417973 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c"} Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.418026 4857 scope.go:117] "RemoveContainer" containerID="238e5fe5ce79196d9fad136f37d3a856207d2e793fc99de6ff9002e9d30f623a" Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.418125 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-786dc49864-sjmlm" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" containerID="cri-o://2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995" gracePeriod=30 Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.418229 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.421313 4857 generic.go:334] "Generic (PLEG): container finished" podID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerID="d7e68e146169b01fa85ba9070b9bae18c38395c8b1aef2d271cd685721718a09" exitCode=0 Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.421361 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerDied","Data":"d7e68e146169b01fa85ba9070b9bae18c38395c8b1aef2d271cd685721718a09"} Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.421390 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerStarted","Data":"6e4f466090d396b644d4a35f05ad69165f0adfa6fbf5a0ba58ff68ad37562924"} Mar 18 14:28:07 crc kubenswrapper[4857]: I0318 14:28:07.520492 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.292208 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.389936 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom\") pod \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.390400 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs\") pod \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.390539 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fsfz\" (UniqueName: \"kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz\") pod \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.390694 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data\") pod \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.390743 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle\") pod \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\" (UID: \"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.390947 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs" (OuterVolumeSpecName: "logs") pod "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" (UID: "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.391385 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.395914 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz" (OuterVolumeSpecName: "kube-api-access-5fsfz") pod "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" (UID: "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"). InnerVolumeSpecName "kube-api-access-5fsfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.396997 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" (UID: "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.452767 4857 generic.go:334] "Generic (PLEG): container finished" podID="34da3be3-c034-4c63-866c-57097fb5c847" containerID="a49e274f85f26e83022028c2708ba9020d8e37163cbca7a1af94a7a5026e4e76" exitCode=0 Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.452845 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerDied","Data":"a49e274f85f26e83022028c2708ba9020d8e37163cbca7a1af94a7a5026e4e76"} Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.457913 4857 generic.go:334] "Generic (PLEG): container finished" podID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerID="2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995" exitCode=143 Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.458002 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995"} Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.458043 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-786dc49864-sjmlm" event={"ID":"2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba","Type":"ContainerDied","Data":"746d04b39a2cc14a741a17263e09b96d6d7e89da84dbf8a6e419cf80f049f449"} Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.458069 4857 scope.go:117] "RemoveContainer" containerID="9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.458229 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-786dc49864-sjmlm" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.464246 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" (UID: "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.493950 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fsfz\" (UniqueName: \"kubernetes.io/projected/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-kube-api-access-5fsfz\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.493979 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.493989 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.502707 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2dbb697-87e8-4c7f-bf29-a918e84fd78e","Type":"ContainerStarted","Data":"ea6b1a6a3949f3181630497b0b3a6c67192ddf2de03a580ba1f7f074658e9f38"} Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.511512 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data" (OuterVolumeSpecName: "config-data") pod "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" (UID: "2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.525821 4857 scope.go:117] "RemoveContainer" containerID="2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.596552 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.673885 4857 scope.go:117] "RemoveContainer" containerID="96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.704862 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.722694 4857 scope.go:117] "RemoveContainer" containerID="9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c" Mar 18 14:28:08 crc kubenswrapper[4857]: E0318 14:28:08.726278 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c\": container with ID starting with 9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c not found: ID does not exist" containerID="9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.726353 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c"} err="failed to get container status \"9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c\": rpc error: code = NotFound desc = could not find container \"9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c\": container 
with ID starting with 9b1acaee92aad3409cb1e85cd91b638584f7852e2492403d023172fd1d1f837c not found: ID does not exist" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.726409 4857 scope.go:117] "RemoveContainer" containerID="2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995" Mar 18 14:28:08 crc kubenswrapper[4857]: E0318 14:28:08.728797 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995\": container with ID starting with 2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995 not found: ID does not exist" containerID="2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.728853 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995"} err="failed to get container status \"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995\": rpc error: code = NotFound desc = could not find container \"2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995\": container with ID starting with 2f6bdd72706447ffc33dfa35f2e3b27ae854205ec92eb850c300128cf28dc995 not found: ID does not exist" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.728886 4857 scope.go:117] "RemoveContainer" containerID="96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30" Mar 18 14:28:08 crc kubenswrapper[4857]: E0318 14:28:08.731006 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30\": container with ID starting with 96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30 not found: ID does not exist" containerID="96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30" 
Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.731036 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30"} err="failed to get container status \"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30\": rpc error: code = NotFound desc = could not find container \"96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30\": container with ID starting with 96e8127008f0b1f12d6da441f4158bab2db2930b0c7fea2aa01f470761f80d30 not found: ID does not exist" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801200 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4zgx\" (UniqueName: \"kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801446 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801508 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801540 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: 
\"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801583 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801663 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.801713 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.802979 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.803534 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.822181 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.828263 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx" (OuterVolumeSpecName: "kube-api-access-t4zgx") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "kube-api-access-t4zgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.829526 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts" (OuterVolumeSpecName: "scripts") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.831595 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.856189 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-786dc49864-sjmlm"] Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.904589 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.904629 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4zgx\" (UniqueName: \"kubernetes.io/projected/34da3be3-c034-4c63-866c-57097fb5c847-kube-api-access-t4zgx\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.904642 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da3be3-c034-4c63-866c-57097fb5c847-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.904651 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:08 crc kubenswrapper[4857]: I0318 14:28:08.904660 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.060726 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6bddf5f585-25djb" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.115166 4857 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.115776 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") pod \"34da3be3-c034-4c63-866c-57097fb5c847\" (UID: \"34da3be3-c034-4c63-866c-57097fb5c847\") " Mar 18 14:28:09 crc kubenswrapper[4857]: W0318 14:28:09.116090 4857 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/34da3be3-c034-4c63-866c-57097fb5c847/volumes/kubernetes.io~secret/combined-ca-bundle Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.118040 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.123903 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.208352 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" path="/var/lib/kubelet/pods/2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba/volumes" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.211162 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data" (OuterVolumeSpecName: "config-data") pod "34da3be3-c034-4c63-866c-57097fb5c847" (UID: "34da3be3-c034-4c63-866c-57097fb5c847"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.226798 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da3be3-c034-4c63-866c-57097fb5c847-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.524845 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerStarted","Data":"e2414f6cd045b017fae7f2564a3e7e9ecb9bb97ddf88f46b8a69950bdfd20f96"} Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.525442 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34da3be3_c034_4c63_866c_57097fb5c847.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.542343 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2dbb697-87e8-4c7f-bf29-a918e84fd78e","Type":"ContainerStarted","Data":"67cfe9bf10be5a5b13f18837f3f54f093ec954f8f0e86cea4c52dd855659f5b8"} Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.556208 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da3be3-c034-4c63-866c-57097fb5c847","Type":"ContainerDied","Data":"21d2ab10abbedd2a9dc502f712c795d10ab0f89c796501ec85f1ea89d9f45c6e"} Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.556286 4857 scope.go:117] "RemoveContainer" containerID="04dae7fa0f4734da09c5f78dbaa4a07150e01d6097d4583160394fd1df2e8009" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.556479 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.566664 4857 generic.go:334] "Generic (PLEG): container finished" podID="b145d97a-a264-4d03-9908-a3957a52ceb0" containerID="c42176e1ec617d90ac802f19bc4cfc8e2a1f36d68d633c24c72832d7ce3c4c1a" exitCode=0 Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.567202 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564068-95p85" event={"ID":"b145d97a-a264-4d03-9908-a3957a52ceb0","Type":"ContainerDied","Data":"c42176e1ec617d90ac802f19bc4cfc8e2a1f36d68d633c24c72832d7ce3c4c1a"} Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.614512 4857 scope.go:117] "RemoveContainer" containerID="a49e274f85f26e83022028c2708ba9020d8e37163cbca7a1af94a7a5026e4e76" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.639982 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.648963 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 
14:28:09.664075 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.664918 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.664949 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.664981 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="proxy-httpd" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.664991 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="proxy-httpd" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.665035 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="ceilometer-notification-agent" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665045 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="ceilometer-notification-agent" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.665061 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665069 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.665096 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665106 4857 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.665139 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665150 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665502 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665538 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665555 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665572 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="proxy-httpd" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665589 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da3be3-c034-4c63-866c-57097fb5c847" containerName="ceilometer-notification-agent" Mar 18 14:28:09 crc kubenswrapper[4857]: E0318 14:28:09.665957 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.665980 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.666312 4857 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.666352 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6de7cc-7f4d-48c0-bf8c-da1cc91bcdba" containerName="barbican-api-log" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.670358 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.674356 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.678722 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.699918 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840026 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840216 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840280 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnqs\" (UniqueName: \"kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs\") pod 
\"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840311 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840398 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840434 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.840504 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.944219 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnqs\" (UniqueName: \"kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 
14:28:09.944864 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.945726 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.945830 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.945952 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.946120 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.946392 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.947103 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.947602 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.952643 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.954670 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.955549 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.961738 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:09 crc kubenswrapper[4857]: I0318 14:28:09.967968 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnqs\" (UniqueName: \"kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs\") pod \"ceilometer-0\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " pod="openstack/ceilometer-0" Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.034740 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.586887 4857 generic.go:334] "Generic (PLEG): container finished" podID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerID="e2414f6cd045b017fae7f2564a3e7e9ecb9bb97ddf88f46b8a69950bdfd20f96" exitCode=0 Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.586960 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerDied","Data":"e2414f6cd045b017fae7f2564a3e7e9ecb9bb97ddf88f46b8a69950bdfd20f96"} Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.592059 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2dbb697-87e8-4c7f-bf29-a918e84fd78e","Type":"ContainerStarted","Data":"b75c337a3d3dd3dd8cd9ac8b53744453d8c682ec7839196e219eb4a416f68e2b"} Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.638193 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.638163643 podStartE2EDuration="4.638163643s" podCreationTimestamp="2026-03-18 14:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 14:28:10.628861199 +0000 UTC m=+1674.757989656" watchObservedRunningTime="2026-03-18 14:28:10.638163643 +0000 UTC m=+1674.767292100" Mar 18 14:28:10 crc kubenswrapper[4857]: W0318 14:28:10.669630 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2f4faf3_64ce_4979_aff0_7eb76f7f5377.slice/crio-861871f58f004fa01987e8c39cf5ad26377723d8abce5d0fcf028a5e6c96e659 WatchSource:0}: Error finding container 861871f58f004fa01987e8c39cf5ad26377723d8abce5d0fcf028a5e6c96e659: Status 404 returned error can't find the container with id 861871f58f004fa01987e8c39cf5ad26377723d8abce5d0fcf028a5e6c96e659 Mar 18 14:28:10 crc kubenswrapper[4857]: I0318 14:28:10.670597 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.084469 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.175896 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgng\" (UniqueName: \"kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng\") pod \"b145d97a-a264-4d03-9908-a3957a52ceb0\" (UID: \"b145d97a-a264-4d03-9908-a3957a52ceb0\") " Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.204696 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34da3be3-c034-4c63-866c-57097fb5c847" path="/var/lib/kubelet/pods/34da3be3-c034-4c63-866c-57097fb5c847/volumes" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.469606 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng" (OuterVolumeSpecName: "kube-api-access-dzgng") pod "b145d97a-a264-4d03-9908-a3957a52ceb0" (UID: 
"b145d97a-a264-4d03-9908-a3957a52ceb0"). InnerVolumeSpecName "kube-api-access-dzgng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.485213 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgng\" (UniqueName: \"kubernetes.io/projected/b145d97a-a264-4d03-9908-a3957a52ceb0-kube-api-access-dzgng\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.605401 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564068-95p85" event={"ID":"b145d97a-a264-4d03-9908-a3957a52ceb0","Type":"ContainerDied","Data":"41d2b6226e70a968f955d102c231e9a11530298ff7f939321207dcb24732c6a0"} Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.605446 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564068-95p85" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.605487 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d2b6226e70a968f955d102c231e9a11530298ff7f939321207dcb24732c6a0" Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.606978 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerStarted","Data":"861871f58f004fa01987e8c39cf5ad26377723d8abce5d0fcf028a5e6c96e659"} Mar 18 14:28:11 crc kubenswrapper[4857]: I0318 14:28:11.901684 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.150951 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: E0318 14:28:12.151910 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b145d97a-a264-4d03-9908-a3957a52ceb0" containerName="oc" Mar 18 14:28:12 crc 
kubenswrapper[4857]: I0318 14:28:12.151928 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b145d97a-a264-4d03-9908-a3957a52ceb0" containerName="oc" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.152202 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b145d97a-a264-4d03-9908-a3957a52ceb0" containerName="oc" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.153258 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.181852 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.182060 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dm5v2" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.185481 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.203132 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.229931 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564062-jlt7p"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.241015 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564062-jlt7p"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.311247 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9gpb\" (UniqueName: \"kubernetes.io/projected/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-kube-api-access-l9gpb\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 
14:28:12.311324 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.311355 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.311428 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.404099 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: E0318 14:28:12.405509 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-l9gpb openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="f7768364-d1c3-4f20-a5ab-ddd57887e5a2" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.414212 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9gpb\" (UniqueName: \"kubernetes.io/projected/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-kube-api-access-l9gpb\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " 
pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.414300 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.414329 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.414407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.414620 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.415397 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: E0318 14:28:12.419234 4857 projected.go:194] Error preparing data for projected volume kube-api-access-l9gpb for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in 
API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Mar 18 14:28:12 crc kubenswrapper[4857]: E0318 14:28:12.419351 4857 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-kube-api-access-l9gpb podName:f7768364-d1c3-4f20-a5ab-ddd57887e5a2 nodeName:}" failed. No retries permitted until 2026-03-18 14:28:12.919319106 +0000 UTC m=+1677.048447563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l9gpb" (UniqueName: "kubernetes.io/projected/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-kube-api-access-l9gpb") pod "openstackclient" (UID: "f7768364-d1c3-4f20-a5ab-ddd57887e5a2") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.427368 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.428021 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.450250 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.451978 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.460566 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.516916 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.517006 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.517271 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7d76\" (UniqueName: \"kubernetes.io/projected/40f263c7-0bb2-473d-a658-41b6104343a9-kube-api-access-d7d76\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.517317 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.621518 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7d76\" (UniqueName: 
\"kubernetes.io/projected/40f263c7-0bb2-473d-a658-41b6104343a9-kube-api-access-d7d76\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.621592 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.621835 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.621899 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config-secret\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.625081 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.626308 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.628199 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40f263c7-0bb2-473d-a658-41b6104343a9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.628994 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerStarted","Data":"95e011dcffddac45f79d0d48c979d4323bd5bf74a0129ddd6a9fd13bca1368c9"} Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.668878 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerStarted","Data":"de936398dfad06d25c2900a725d41a3fe1236f429a4963f99fd02fd2821adfac"} Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.669015 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.684011 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7d76\" (UniqueName: \"kubernetes.io/projected/40f263c7-0bb2-473d-a658-41b6104343a9-kube-api-access-d7d76\") pod \"openstackclient\" (UID: \"40f263c7-0bb2-473d-a658-41b6104343a9\") " pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.703943 4857 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f7768364-d1c3-4f20-a5ab-ddd57887e5a2" podUID="40f263c7-0bb2-473d-a658-41b6104343a9" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.716227 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qf8qd" podStartSLOduration=4.993017314 podStartE2EDuration="8.716192597s" podCreationTimestamp="2026-03-18 14:28:04 +0000 UTC" firstStartedPulling="2026-03-18 14:28:07.439353686 +0000 UTC m=+1671.568482143" lastFinishedPulling="2026-03-18 14:28:11.162528969 +0000 UTC m=+1675.291657426" observedRunningTime="2026-03-18 14:28:12.69287622 +0000 UTC m=+1676.822004677" watchObservedRunningTime="2026-03-18 14:28:12.716192597 +0000 UTC m=+1676.845321054" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.822289 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.918358 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.931535 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config\") pod \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.931693 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret\") pod \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.931799 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle\") pod \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\" (UID: \"f7768364-d1c3-4f20-a5ab-ddd57887e5a2\") " Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.932010 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f7768364-d1c3-4f20-a5ab-ddd57887e5a2" (UID: "f7768364-d1c3-4f20-a5ab-ddd57887e5a2"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.932567 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9gpb\" (UniqueName: \"kubernetes.io/projected/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-kube-api-access-l9gpb\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.932607 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.940994 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f7768364-d1c3-4f20-a5ab-ddd57887e5a2" (UID: "f7768364-d1c3-4f20-a5ab-ddd57887e5a2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:12 crc kubenswrapper[4857]: I0318 14:28:12.941031 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7768364-d1c3-4f20-a5ab-ddd57887e5a2" (UID: "f7768364-d1c3-4f20-a5ab-ddd57887e5a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.034499 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.034536 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7768364-d1c3-4f20-a5ab-ddd57887e5a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.180804 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5427968-6f77-45c6-9401-fec9f5409905" path="/var/lib/kubelet/pods/a5427968-6f77-45c6-9401-fec9f5409905/volumes" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.181903 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7768364-d1c3-4f20-a5ab-ddd57887e5a2" path="/var/lib/kubelet/pods/f7768364-d1c3-4f20-a5ab-ddd57887e5a2/volumes" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.616271 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 14:28:13 crc kubenswrapper[4857]: W0318 14:28:13.621828 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40f263c7_0bb2_473d_a658_41b6104343a9.slice/crio-12fe8bc8fcd4bda21d1566c5509dcd881aaf7a2db275f5451ce22fa6ab756cec WatchSource:0}: Error finding container 12fe8bc8fcd4bda21d1566c5509dcd881aaf7a2db275f5451ce22fa6ab756cec: Status 404 returned error can't find the container with id 12fe8bc8fcd4bda21d1566c5509dcd881aaf7a2db275f5451ce22fa6ab756cec Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.686851 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"40f263c7-0bb2-473d-a658-41b6104343a9","Type":"ContainerStarted","Data":"12fe8bc8fcd4bda21d1566c5509dcd881aaf7a2db275f5451ce22fa6ab756cec"} Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.696377 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerStarted","Data":"1db29361e9749fadfc0b932964ddce8d3e87453c9057773e06b7661b8e13fbe3"} Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.709770 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerStarted","Data":"754ebb9d90b9e30fc81c51c3b2180d2b62ef96328de01fb62055e4efb7189bcc"} Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.709833 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"117d706b-860f-4f17-8f2b-5d27b7cdfe61","Type":"ContainerStarted","Data":"1a534cdce5afeb3da1954b0b228a1c6f95f3b22e12862c8e4fc270e5f9e8913a"} Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.710165 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.756706 4857 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="f7768364-d1c3-4f20-a5ab-ddd57887e5a2" podUID="40f263c7-0bb2-473d-a658-41b6104343a9" Mar 18 14:28:13 crc kubenswrapper[4857]: I0318 14:28:13.762170 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=132.762145737 podStartE2EDuration="2m12.762145737s" podCreationTimestamp="2026-03-18 14:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:13.74596979 +0000 UTC m=+1677.875098247" watchObservedRunningTime="2026-03-18 14:28:13.762145737 +0000 UTC m=+1677.891274194" Mar 18 14:28:14 crc kubenswrapper[4857]: I0318 14:28:14.727703 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerStarted","Data":"f64321db60225e30892564f68ebd7e8290f3adbd0654128e0b032d54359cd1c2"} Mar 18 14:28:15 crc kubenswrapper[4857]: I0318 14:28:15.012375 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:15 crc kubenswrapper[4857]: I0318 14:28:15.012743 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:16 crc kubenswrapper[4857]: I0318 14:28:16.092766 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qf8qd" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" probeResult="failure" output=< Mar 18 14:28:16 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:28:16 crc 
kubenswrapper[4857]: > Mar 18 14:28:16 crc kubenswrapper[4857]: I0318 14:28:16.765403 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 18 14:28:16 crc kubenswrapper[4857]: I0318 14:28:16.765467 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 18 14:28:16 crc kubenswrapper[4857]: I0318 14:28:16.799779 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 18 14:28:17 crc kubenswrapper[4857]: I0318 14:28:17.181368 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:28:17 crc kubenswrapper[4857]: E0318 14:28:17.181696 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:17 crc kubenswrapper[4857]: I0318 14:28:17.258213 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 18 14:28:17 crc kubenswrapper[4857]: I0318 14:28:17.809546 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.516550 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.518995 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 
14:28:19.878779 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bd6fd9d7b-xrcmg"] Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.880986 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.898896 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bd6fd9d7b-xrcmg"] Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977466 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-public-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977554 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-scripts\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977632 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-combined-ca-bundle\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977793 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbnmt\" (UniqueName: \"kubernetes.io/projected/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-kube-api-access-tbnmt\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: 
\"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977872 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-internal-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977918 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-logs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:19 crc kubenswrapper[4857]: I0318 14:28:19.977962 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-config-data\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081310 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbnmt\" (UniqueName: \"kubernetes.io/projected/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-kube-api-access-tbnmt\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081441 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-internal-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: 
\"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081478 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-logs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081527 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-config-data\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081617 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-public-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081652 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-scripts\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.081694 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-combined-ca-bundle\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc 
kubenswrapper[4857]: I0318 14:28:20.082004 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-logs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.090178 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-public-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.092331 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-combined-ca-bundle\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.093249 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-internal-tls-certs\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.093682 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-config-data\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.095394 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-scripts\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.132302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbnmt\" (UniqueName: \"kubernetes.io/projected/744c80f0-c04e-48e5-a6ae-8fe7ae2f5775-kube-api-access-tbnmt\") pod \"placement-5bd6fd9d7b-xrcmg\" (UID: \"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775\") " pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:20 crc kubenswrapper[4857]: I0318 14:28:20.216072 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:22 crc kubenswrapper[4857]: I0318 14:28:22.119407 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bf8cc5fd5-pf2nl" Mar 18 14:28:22 crc kubenswrapper[4857]: I0318 14:28:22.241739 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:28:22 crc kubenswrapper[4857]: I0318 14:28:22.242404 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-api" containerID="cri-o://a386f81110d76903657fc5acab50ac31f955b88c2f0d544459a4f748338fe6b4" gracePeriod=30 Mar 18 14:28:22 crc kubenswrapper[4857]: I0318 14:28:22.242542 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6787dc4b5d-t6ns5" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" containerID="cri-o://431d75572bf41dee46bbb2c87bec9ef742c99daa57fb6ec41e1c2a63cbf78c63" gracePeriod=30 Mar 18 14:28:22 crc kubenswrapper[4857]: E0318 14:28:22.591297 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c0d12e_fe57_42ab_bb2c_3f7c8a4af061.slice/crio-b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c0d12e_fe57_42ab_bb2c_3f7c8a4af061.slice/crio-conmon-b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.030827 4857 generic.go:334] "Generic (PLEG): container finished" podID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerID="431d75572bf41dee46bbb2c87bec9ef742c99daa57fb6ec41e1c2a63cbf78c63" exitCode=0 Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.030947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerDied","Data":"431d75572bf41dee46bbb2c87bec9ef742c99daa57fb6ec41e1c2a63cbf78c63"} Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.033825 4857 generic.go:334] "Generic (PLEG): container finished" podID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerID="b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a" exitCode=137 Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.033905 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerDied","Data":"b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a"} Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.413718 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-975859b47-gfk64"] Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.419278 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.424804 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.425011 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.426007 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.454775 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-975859b47-gfk64"] Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507161 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-config-data\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-run-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507257 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-log-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507281 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-internal-tls-certs\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-public-tls-certs\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507374 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-combined-ca-bundle\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507408 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-etc-swift\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.507429 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qjz\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-kube-api-access-j2qjz\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc 
kubenswrapper[4857]: I0318 14:28:23.610194 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-public-tls-certs\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610245 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-combined-ca-bundle\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610280 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-etc-swift\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610308 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qjz\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-kube-api-access-j2qjz\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610490 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-config-data\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610518 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-run-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610540 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-log-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.610564 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-internal-tls-certs\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.611151 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-run-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.611235 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559e9866-068c-4602-879b-6291b10302c1-log-httpd\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.617922 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-public-tls-certs\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.618200 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-combined-ca-bundle\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.619058 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-etc-swift\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.621238 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-config-data\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.650696 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qjz\" (UniqueName: \"kubernetes.io/projected/559e9866-068c-4602-879b-6291b10302c1-kube-api-access-j2qjz\") pod \"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.651494 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/559e9866-068c-4602-879b-6291b10302c1-internal-tls-certs\") pod 
\"swift-proxy-975859b47-gfk64\" (UID: \"559e9866-068c-4602-879b-6291b10302c1\") " pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:23 crc kubenswrapper[4857]: I0318 14:28:23.743469 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:25 crc kubenswrapper[4857]: I0318 14:28:25.214792 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:28:25 crc kubenswrapper[4857]: I0318 14:28:25.391402 4857 generic.go:334] "Generic (PLEG): container finished" podID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerID="e9f5f32a7cff1e1d677cd5eec47c2d2667cba16c41b31530969d13a4ecaaced9" exitCode=137 Mar 18 14:28:25 crc kubenswrapper[4857]: I0318 14:28:25.391457 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerDied","Data":"e9f5f32a7cff1e1d677cd5eec47c2d2667cba16c41b31530969d13a4ecaaced9"} Mar 18 14:28:25 crc kubenswrapper[4857]: I0318 14:28:25.791103 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.212:8776/healthcheck\": dial tcp 10.217.0.212:8776: connect: connection refused" Mar 18 14:28:26 crc kubenswrapper[4857]: I0318 14:28:26.306915 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qf8qd" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" probeResult="failure" output=< Mar 18 14:28:26 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:28:26 crc kubenswrapper[4857]: > Mar 18 14:28:27 crc kubenswrapper[4857]: I0318 14:28:27.420181 4857 generic.go:334] "Generic (PLEG): container finished" podID="daf5f3ee-ad7f-4009-affb-21abb788b370" 
containerID="a386f81110d76903657fc5acab50ac31f955b88c2f0d544459a4f748338fe6b4" exitCode=0 Mar 18 14:28:27 crc kubenswrapper[4857]: I0318 14:28:27.420445 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerDied","Data":"a386f81110d76903657fc5acab50ac31f955b88c2f0d544459a4f748338fe6b4"} Mar 18 14:28:30 crc kubenswrapper[4857]: I0318 14:28:30.163571 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:28:30 crc kubenswrapper[4857]: E0318 14:28:30.164410 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:30 crc kubenswrapper[4857]: I0318 14:28:30.946939 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.212:8776/healthcheck\": dial tcp 10.217.0.212:8776: connect: connection refused" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.499041 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.501192 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.503713 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.504567 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.504882 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-h4jlb" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.536169 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.536232 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.536290 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4q5f\" (UniqueName: \"kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.536398 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.540706 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.638315 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.638513 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.638541 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.638567 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4q5f\" (UniqueName: \"kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 
14:28:33.640059 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.648978 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.650679 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.673015 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.680393 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.685608 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.712381 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.714803 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.722542 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4q5f\" (UniqueName: \"kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f\") pod \"heat-engine-8dbd8fb56-f2qm7\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.749880 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756151 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqclj\" (UniqueName: \"kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756293 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756383 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756436 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756590 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756672 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756858 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcrs\" (UniqueName: \"kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.756909 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.757015 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.779836 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.803806 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.805922 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.809295 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.834456 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.858619 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.858990 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859135 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859239 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 
14:28:33.859355 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcrs\" (UniqueName: \"kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859476 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859610 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859697 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnbk\" (UniqueName: \"kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.859990 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqclj\" (UniqueName: \"kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 
14:28:33.860108 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.861272 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.861322 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.861349 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.861371 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.862433 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.862594 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.863181 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.865335 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.870079 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.870761 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:33 crc kubenswrapper[4857]: I0318 14:28:33.874347 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.256198 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.256327 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.256348 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.256449 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcnbk\" (UniqueName: \"kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk\") pod 
\"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.258126 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.260656 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.267735 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.272298 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.275134 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqclj\" (UniqueName: \"kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj\") pod \"dnsmasq-dns-7756b9d78c-62glg\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.275361 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.304359 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwcrs\" (UniqueName: \"kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs\") pod \"heat-cfnapi-7795f8799-xsk4z\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.305809 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcnbk\" (UniqueName: \"kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk\") pod \"heat-api-99884ddc-qwc56\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.359559 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.435571 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:34 crc kubenswrapper[4857]: I0318 14:28:34.454709 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.077639 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.490383 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.549897 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.576223 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.644248 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.644553 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.644593 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vj68\" (UniqueName: \"kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68\") pod \"community-operators-b7d67\" (UID: 
\"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.670098 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.752181 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.752420 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.752443 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vj68\" (UniqueName: \"kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.753525 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.754034 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.775550 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vj68\" (UniqueName: \"kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68\") pod \"community-operators-b7d67\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.794460 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.212:8776/healthcheck\": dial tcp 10.217.0.212:8776: connect: connection refused" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.794623 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 18 14:28:35 crc kubenswrapper[4857]: I0318 14:28:35.913670 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:36 crc kubenswrapper[4857]: E0318 14:28:36.007954 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Mar 18 14:28:36 crc kubenswrapper[4857]: E0318 14:28:36.008196 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n96h567hd4h5d5h695h589h56fhbbh59dhbfh6ch594h58fh5dh566h688h544h5d5h5b9h644h646h5c4h79hf9h57h67bh9fh645h65dh57dh64fh579q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,Recu
rsiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7d76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(40f263c7-0bb2-473d-a658-41b6104343a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:28:36 crc kubenswrapper[4857]: E0318 14:28:36.010397 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="40f263c7-0bb2-473d-a658-41b6104343a9" Mar 18 14:28:36 crc kubenswrapper[4857]: E0318 14:28:36.886293 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="40f263c7-0bb2-473d-a658-41b6104343a9" Mar 18 14:28:36 crc kubenswrapper[4857]: I0318 14:28:36.952891 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:36 crc kubenswrapper[4857]: I0318 14:28:36.953564 4857 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qf8qd" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" containerID="cri-o://95e011dcffddac45f79d0d48c979d4323bd5bf74a0129ddd6a9fd13bca1368c9" gracePeriod=2 Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.435294 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.530917 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle\") pod \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.531293 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dp5\" (UniqueName: \"kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5\") pod \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.531345 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs\") pod \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.531411 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data\") pod \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.531484 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom\") pod \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\" (UID: \"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061\") " Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.541823 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs" (OuterVolumeSpecName: "logs") pod "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" (UID: "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.547520 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" (UID: "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.553413 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5" (OuterVolumeSpecName: "kube-api-access-q4dp5") pod "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" (UID: "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061"). InnerVolumeSpecName "kube-api-access-q4dp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.625853 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" (UID: "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.643931 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.643973 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4dp5\" (UniqueName: \"kubernetes.io/projected/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-kube-api-access-q4dp5\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.643985 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.643995 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.661197 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data" (OuterVolumeSpecName: "config-data") pod "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" (UID: "d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.746532 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.898598 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3cebbcbb-5c85-4489-b408-6e31e38ccff2","Type":"ContainerDied","Data":"c5833a82245be25182e8f7aac3bf49b953bd4ccb487ecc307492959b006a2a3e"} Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.898653 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5833a82245be25182e8f7aac3bf49b953bd4ccb487ecc307492959b006a2a3e" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.909595 4857 generic.go:334] "Generic (PLEG): container finished" podID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerID="95e011dcffddac45f79d0d48c979d4323bd5bf74a0129ddd6a9fd13bca1368c9" exitCode=0 Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.909663 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerDied","Data":"95e011dcffddac45f79d0d48c979d4323bd5bf74a0129ddd6a9fd13bca1368c9"} Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.914111 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" event={"ID":"d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061","Type":"ContainerDied","Data":"b58173bb4208b46badedfe288366a99f599dca3fb4537a696febb83d69b8f080"} Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.914154 4857 scope.go:117] "RemoveContainer" containerID="b8ec12bfa771c188645747182a8f2eeeebca8cc3fca78e345a1e69f8df0ddc5a" Mar 18 14:28:37 crc kubenswrapper[4857]: I0318 14:28:37.914322 4857 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-89f9cddcb-2jcgs" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.016047 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.054600 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.054727 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.054725 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.054847 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.054874 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.055019 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.055078 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.055095 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7b6m\" (UniqueName: \"kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m\") pod \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\" (UID: \"3cebbcbb-5c85-4489-b408-6e31e38ccff2\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.055655 4857 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3cebbcbb-5c85-4489-b408-6e31e38ccff2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.056951 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs" (OuterVolumeSpecName: "logs") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.064969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.080173 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.091771 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts" (OuterVolumeSpecName: "scripts") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.096073 4857 scope.go:117] "RemoveContainer" containerID="838e756aaf3234086b02230c39627295d1241dbf473384e99f7ca2cc5570a614" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.099058 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m" (OuterVolumeSpecName: "kube-api-access-b7b6m") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "kube-api-access-b7b6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.149359 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.158298 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config\") pod \"daf5f3ee-ad7f-4009-affb-21abb788b370\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.158369 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config\") pod \"daf5f3ee-ad7f-4009-affb-21abb788b370\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.158458 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle\") pod \"daf5f3ee-ad7f-4009-affb-21abb788b370\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.158787 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs\") pod \"daf5f3ee-ad7f-4009-affb-21abb788b370\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.158809 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zdv7\" (UniqueName: \"kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7\") pod \"daf5f3ee-ad7f-4009-affb-21abb788b370\" (UID: \"daf5f3ee-ad7f-4009-affb-21abb788b370\") " Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.159543 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.159558 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7b6m\" (UniqueName: \"kubernetes.io/projected/3cebbcbb-5c85-4489-b408-6e31e38ccff2-kube-api-access-b7b6m\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.159580 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.159588 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.159597 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cebbcbb-5c85-4489-b408-6e31e38ccff2-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.182249 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "daf5f3ee-ad7f-4009-affb-21abb788b370" (UID: "daf5f3ee-ad7f-4009-affb-21abb788b370"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.185063 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data" (OuterVolumeSpecName: "config-data") pod "3cebbcbb-5c85-4489-b408-6e31e38ccff2" (UID: "3cebbcbb-5c85-4489-b408-6e31e38ccff2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.185069 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7" (OuterVolumeSpecName: "kube-api-access-9zdv7") pod "daf5f3ee-ad7f-4009-affb-21abb788b370" (UID: "daf5f3ee-ad7f-4009-affb-21abb788b370"). InnerVolumeSpecName "kube-api-access-9zdv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:38 crc kubenswrapper[4857]: I0318 14:28:38.206126 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.189631 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cebbcbb-5c85-4489-b408-6e31e38ccff2-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.189655 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zdv7\" (UniqueName: \"kubernetes.io/projected/daf5f3ee-ad7f-4009-affb-21abb788b370-kube-api-access-9zdv7\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.189664 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.364567 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daf5f3ee-ad7f-4009-affb-21abb788b370" (UID: "daf5f3ee-ad7f-4009-affb-21abb788b370"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.397022 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config" (OuterVolumeSpecName: "config") pod "daf5f3ee-ad7f-4009-affb-21abb788b370" (UID: "daf5f3ee-ad7f-4009-affb-21abb788b370"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.447050 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6787dc4b5d-t6ns5" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.468630 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-89f9cddcb-2jcgs"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.487594 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6787dc4b5d-t6ns5" event={"ID":"daf5f3ee-ad7f-4009-affb-21abb788b370","Type":"ContainerDied","Data":"3d72e292b8f9b49a5cd6d7cdf0cb870aa7b99cf5ede73b8c3087b2fecff088ed"} Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.487693 4857 scope.go:117] "RemoveContainer" containerID="431d75572bf41dee46bbb2c87bec9ef742c99daa57fb6ec41e1c2a63cbf78c63" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.478926 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.521366 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.521406 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.554100 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "daf5f3ee-ad7f-4009-affb-21abb788b370" (UID: "daf5f3ee-ad7f-4009-affb-21abb788b370"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.626086 4857 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/daf5f3ee-ad7f-4009-affb-21abb788b370-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.634577 4857 scope.go:117] "RemoveContainer" containerID="a386f81110d76903657fc5acab50ac31f955b88c2f0d544459a4f748338fe6b4" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.702209 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.759411 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:28:39 crc kubenswrapper[4857]: W0318 14:28:39.795248 4857 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe9c8d8_6bd9_4958_b511_ddd797244400.slice/crio-ca70e3f818bd9114875e1fe541eabc4a2c1b94e73f38e2f54625874533325a46 WatchSource:0}: Error finding container ca70e3f818bd9114875e1fe541eabc4a2c1b94e73f38e2f54625874533325a46: Status 404 returned error can't find the container with id ca70e3f818bd9114875e1fe541eabc4a2c1b94e73f38e2f54625874533325a46 Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.799001 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815062 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815843 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815861 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener" Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815877 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-api" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815883 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-api" Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815895 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api-log" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815901 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api-log" Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815926 4857 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener-log" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815931 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener-log" Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815945 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815950 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" Mar 18 14:28:39 crc kubenswrapper[4857]: E0318 14:28:39.815968 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.815974 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.816209 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.816221 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" containerName="cinder-api-log" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.816232 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener-log" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.816247 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-httpd" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 
14:28:39.816256 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" containerName="barbican-keystone-listener" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.816265 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" containerName="neutron-api" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.835229 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.841653 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.843272 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.843851 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.846137 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.856546 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.897461 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.947635 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.954588 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.955533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38f691fd-1071-4bdd-a29a-e0b7ae81432e-logs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.955608 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38f691fd-1071-4bdd-a29a-e0b7ae81432e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.955692 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98s9m\" (UniqueName: \"kubernetes.io/projected/38f691fd-1071-4bdd-a29a-e0b7ae81432e-kube-api-access-98s9m\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.955864 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.956133 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data-custom\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 
14:28:39.956342 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-scripts\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.956431 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.956481 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.966660 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:28:39 crc kubenswrapper[4857]: I0318 14:28:39.987349 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6787dc4b5d-t6ns5"] Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.022699 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bd6fd9d7b-xrcmg"] Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059018 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38f691fd-1071-4bdd-a29a-e0b7ae81432e-logs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059072 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38f691fd-1071-4bdd-a29a-e0b7ae81432e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98s9m\" (UniqueName: \"kubernetes.io/projected/38f691fd-1071-4bdd-a29a-e0b7ae81432e-kube-api-access-98s9m\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059230 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38f691fd-1071-4bdd-a29a-e0b7ae81432e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059543 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38f691fd-1071-4bdd-a29a-e0b7ae81432e-logs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.059861 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.060051 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.060260 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-scripts\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.060325 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.060353 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.060416 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.071130 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.073264 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.073943 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data-custom\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.081246 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-config-data\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.098956 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-scripts\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.110546 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38f691fd-1071-4bdd-a29a-e0b7ae81432e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.118616 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98s9m\" (UniqueName: \"kubernetes.io/projected/38f691fd-1071-4bdd-a29a-e0b7ae81432e-kube-api-access-98s9m\") pod \"cinder-api-0\" (UID: \"38f691fd-1071-4bdd-a29a-e0b7ae81432e\") " pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.127505 
4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.193853 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-975859b47-gfk64"] Mar 18 14:28:40 crc kubenswrapper[4857]: W0318 14:28:40.203629 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod559e9866_068c_4602_879b_6291b10302c1.slice/crio-7636a67c28a80c3ea2d4cd0cce53438f4c3f0e4f8fc623f9f5a48f14d783258a WatchSource:0}: Error finding container 7636a67c28a80c3ea2d4cd0cce53438f4c3f0e4f8fc623f9f5a48f14d783258a: Status 404 returned error can't find the container with id 7636a67c28a80c3ea2d4cd0cce53438f4c3f0e4f8fc623f9f5a48f14d783258a Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.207407 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.264300 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzfqp\" (UniqueName: \"kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp\") pod \"5a50061c-db27-46f4-ad90-a0d2ced91127\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.264427 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities\") pod \"5a50061c-db27-46f4-ad90-a0d2ced91127\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.264938 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content\") pod 
\"5a50061c-db27-46f4-ad90-a0d2ced91127\" (UID: \"5a50061c-db27-46f4-ad90-a0d2ced91127\") " Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.269424 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities" (OuterVolumeSpecName: "utilities") pod "5a50061c-db27-46f4-ad90-a0d2ced91127" (UID: "5a50061c-db27-46f4-ad90-a0d2ced91127"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.300585 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp" (OuterVolumeSpecName: "kube-api-access-bzfqp") pod "5a50061c-db27-46f4-ad90-a0d2ced91127" (UID: "5a50061c-db27-46f4-ad90-a0d2ced91127"). InnerVolumeSpecName "kube-api-access-bzfqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.318507 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.336914 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a50061c-db27-46f4-ad90-a0d2ced91127" (UID: "5a50061c-db27-46f4-ad90-a0d2ced91127"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.651422 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzfqp\" (UniqueName: \"kubernetes.io/projected/5a50061c-db27-46f4-ad90-a0d2ced91127-kube-api-access-bzfqp\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.651467 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.651478 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a50061c-db27-46f4-ad90-a0d2ced91127-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.680458 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7795f8799-xsk4z" event={"ID":"afe9c8d8-6bd9-4958-b511-ddd797244400","Type":"ContainerStarted","Data":"ca70e3f818bd9114875e1fe541eabc4a2c1b94e73f38e2f54625874533325a46"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.685351 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8dbd8fb56-f2qm7" event={"ID":"0e7d3d4f-6574-4453-9838-6433716eb9ba","Type":"ContainerStarted","Data":"05ec60997e13ba5817cb3f1f7f2cdfbaca7da6c4fef303ceed3a03aaa32ee6a4"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.689181 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-99884ddc-qwc56" event={"ID":"85d19c96-01eb-49d5-8240-825a53ed459d","Type":"ContainerStarted","Data":"d38df2c985d409ae130538fdfe3feac05deb4c99ad3273b61c244f6ac2f558d5"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.694159 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qf8qd" 
event={"ID":"5a50061c-db27-46f4-ad90-a0d2ced91127","Type":"ContainerDied","Data":"6e4f466090d396b644d4a35f05ad69165f0adfa6fbf5a0ba58ff68ad37562924"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.694205 4857 scope.go:117] "RemoveContainer" containerID="95e011dcffddac45f79d0d48c979d4323bd5bf74a0129ddd6a9fd13bca1368c9" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.694338 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qf8qd" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.701091 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-975859b47-gfk64" event={"ID":"559e9866-068c-4602-879b-6291b10302c1","Type":"ContainerStarted","Data":"7636a67c28a80c3ea2d4cd0cce53438f4c3f0e4f8fc623f9f5a48f14d783258a"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.705793 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd6fd9d7b-xrcmg" event={"ID":"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775","Type":"ContainerStarted","Data":"3e9b9cf746ce4dcb5d17824e2db5598d974c42cf6384ebb571225428fb25fc16"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.726938 4857 generic.go:334] "Generic (PLEG): container finished" podID="0626adfc-bb2b-4796-bd56-551264758fd6" containerID="88e55f5c97ff5f99b00940de098bce5d35e573430aa7256f4339ec1ad0b4dc3b" exitCode=0 Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.727107 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" event={"ID":"0626adfc-bb2b-4796-bd56-551264758fd6","Type":"ContainerDied","Data":"88e55f5c97ff5f99b00940de098bce5d35e573430aa7256f4339ec1ad0b4dc3b"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.727151 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" 
event={"ID":"0626adfc-bb2b-4796-bd56-551264758fd6","Type":"ContainerStarted","Data":"fffb8cf206291c4279602b5ab13d0593d8bc344318dd234c41351bc2c22a3421"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.737746 4857 generic.go:334] "Generic (PLEG): container finished" podID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerID="0682e5a05e1ca3d0f46a367825606938d9c73f4bc531e1e63c9ce86d2cfd9bc2" exitCode=0 Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.737867 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerDied","Data":"0682e5a05e1ca3d0f46a367825606938d9c73f4bc531e1e63c9ce86d2cfd9bc2"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.737915 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerStarted","Data":"bdcdc30d20cb25670ef308e8baaf25d66fcc338726a0df22f1a94041b04dfc7b"} Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.906497 4857 scope.go:117] "RemoveContainer" containerID="e2414f6cd045b017fae7f2564a3e7e9ecb9bb97ddf88f46b8a69950bdfd20f96" Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.919860 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.932704 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qf8qd"] Mar 18 14:28:40 crc kubenswrapper[4857]: I0318 14:28:40.960656 4857 scope.go:117] "RemoveContainer" containerID="d7e68e146169b01fa85ba9070b9bae18c38395c8b1aef2d271cd685721718a09" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.139060 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 18 14:28:41 crc kubenswrapper[4857]: W0318 14:28:41.158939 4857 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38f691fd_1071_4bdd_a29a_e0b7ae81432e.slice/crio-edbcffb5c13845828874eb6979695c35494b70f442c4c7832aca03deb28a282e WatchSource:0}: Error finding container edbcffb5c13845828874eb6979695c35494b70f442c4c7832aca03deb28a282e: Status 404 returned error can't find the container with id edbcffb5c13845828874eb6979695c35494b70f442c4c7832aca03deb28a282e Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.171572 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:28:41 crc kubenswrapper[4857]: E0318 14:28:41.171934 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.224790 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cebbcbb-5c85-4489-b408-6e31e38ccff2" path="/var/lib/kubelet/pods/3cebbcbb-5c85-4489-b408-6e31e38ccff2/volumes" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.225636 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" path="/var/lib/kubelet/pods/5a50061c-db27-46f4-ad90-a0d2ced91127/volumes" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.226483 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061" path="/var/lib/kubelet/pods/d4c0d12e-fe57-42ab-bb2c-3f7c8a4af061/volumes" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.228810 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="daf5f3ee-ad7f-4009-affb-21abb788b370" path="/var/lib/kubelet/pods/daf5f3ee-ad7f-4009-affb-21abb788b370/volumes" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.805526 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd6fd9d7b-xrcmg" event={"ID":"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775","Type":"ContainerStarted","Data":"3e1d493a064fb84ca13d0aa5e5da63fa849aa612b97e5817a82a1d21186e08da"} Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.806586 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bd6fd9d7b-xrcmg" event={"ID":"744c80f0-c04e-48e5-a6ae-8fe7ae2f5775","Type":"ContainerStarted","Data":"d889bca58be6e9a7a9999ffbd275e03bae55c4f16a6109c2f7175f8480aa3017"} Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.808819 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.808901 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.818714 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"38f691fd-1071-4bdd-a29a-e0b7ae81432e","Type":"ContainerStarted","Data":"edbcffb5c13845828874eb6979695c35494b70f442c4c7832aca03deb28a282e"} Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.863995 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8dbd8fb56-f2qm7" event={"ID":"0e7d3d4f-6574-4453-9838-6433716eb9ba","Type":"ContainerStarted","Data":"78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e"} Mar 18 14:28:41 crc kubenswrapper[4857]: I0318 14:28:41.866407 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.151579 4857 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" event={"ID":"0626adfc-bb2b-4796-bd56-551264758fd6","Type":"ContainerStarted","Data":"3eb1f6e0247b7e807c9dc1285e267168f64961b20b7feb2592d6cf7343c30ade"} Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.180071 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.142993 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bd6fd9d7b-xrcmg" podStartSLOduration=24.142965606 podStartE2EDuration="24.142965606s" podCreationTimestamp="2026-03-18 14:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:41.844678335 +0000 UTC m=+1705.973806792" watchObservedRunningTime="2026-03-18 14:28:43.142965606 +0000 UTC m=+1707.272094063" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.279404 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8dbd8fb56-f2qm7" podStartSLOduration=10.279376219 podStartE2EDuration="10.279376219s" podCreationTimestamp="2026-03-18 14:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:43.106498238 +0000 UTC m=+1707.235626695" watchObservedRunningTime="2026-03-18 14:28:43.279376219 +0000 UTC m=+1707.408504676" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.321092 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" podStartSLOduration=10.321071618 podStartE2EDuration="10.321071618s" podCreationTimestamp="2026-03-18 14:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:43.2965289 +0000 UTC 
m=+1707.425657357" watchObservedRunningTime="2026-03-18 14:28:43.321071618 +0000 UTC m=+1707.450200075" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.362181 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-975859b47-gfk64" event={"ID":"559e9866-068c-4602-879b-6291b10302c1","Type":"ContainerStarted","Data":"a834e7283f9c7641d1a84918402d27e187c94eb9daed8cec88ae488f1c56340d"} Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.362498 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-975859b47-gfk64" event={"ID":"559e9866-068c-4602-879b-6291b10302c1","Type":"ContainerStarted","Data":"c9612d84312f3a20b4aa4a5ff3377b6166840fdeb8124600fe01fd433a7889c8"} Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.364195 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.364476 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.404611 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-975859b47-gfk64" podStartSLOduration=20.40459005 podStartE2EDuration="20.40459005s" podCreationTimestamp="2026-03-18 14:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:43.400109937 +0000 UTC m=+1707.529238394" watchObservedRunningTime="2026-03-18 14:28:43.40459005 +0000 UTC m=+1707.533718497" Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.601965 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.602277 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-log" containerID="cri-o://bb2ab01878d4c92536a05e4d4e4a0e5dd770a5abc36c462aa7656ddd28f9558b" gracePeriod=30 Mar 18 14:28:43 crc kubenswrapper[4857]: I0318 14:28:43.602472 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-httpd" containerID="cri-o://fd3f3e4c72979e1311976bba0362715519392668a8948f3eeb9614f335b3f82c" gracePeriod=30 Mar 18 14:28:44 crc kubenswrapper[4857]: I0318 14:28:44.383266 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerStarted","Data":"8f6cce8905926ddebdca984fc7f18b005d716f8ba879848bc6d0080ec86bd7d0"} Mar 18 14:28:44 crc kubenswrapper[4857]: I0318 14:28:44.385639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"38f691fd-1071-4bdd-a29a-e0b7ae81432e","Type":"ContainerStarted","Data":"96eaaf72c9a94af671036c90f4ea29875f786d77ab2602a06e27e0b8e03b62d0"} Mar 18 14:28:44 crc kubenswrapper[4857]: I0318 14:28:44.391647 4857 generic.go:334] "Generic (PLEG): container finished" podID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerID="bb2ab01878d4c92536a05e4d4e4a0e5dd770a5abc36c462aa7656ddd28f9558b" exitCode=143 Mar 18 14:28:44 crc kubenswrapper[4857]: I0318 14:28:44.391865 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerDied","Data":"bb2ab01878d4c92536a05e4d4e4a0e5dd770a5abc36c462aa7656ddd28f9558b"} Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.424507 4857 generic.go:334] "Generic (PLEG): container finished" podID="b34cc331-1dee-4d42-8824-d91dbf40e144" 
containerID="8f6cce8905926ddebdca984fc7f18b005d716f8ba879848bc6d0080ec86bd7d0" exitCode=0 Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.424920 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerDied","Data":"8f6cce8905926ddebdca984fc7f18b005d716f8ba879848bc6d0080ec86bd7d0"} Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.497358 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:28:45 crc kubenswrapper[4857]: E0318 14:28:45.498142 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="extract-utilities" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.498161 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="extract-utilities" Mar 18 14:28:45 crc kubenswrapper[4857]: E0318 14:28:45.498188 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.498195 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" Mar 18 14:28:45 crc kubenswrapper[4857]: E0318 14:28:45.498231 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="extract-content" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.498240 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="extract-content" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.498561 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a50061c-db27-46f4-ad90-a0d2ced91127" containerName="registry-server" Mar 18 14:28:45 crc 
kubenswrapper[4857]: I0318 14:28:45.499605 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.528832 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.530685 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.578735 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.617093 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.617164 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.617208 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-274fp\" (UniqueName: \"kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:45 crc kubenswrapper[4857]: I0318 14:28:45.617273 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:46 crc kubenswrapper[4857]: I0318 14:28:46.987832 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.064120 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.064529 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.064636 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.064691 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-274fp\" (UniqueName: \"kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: 
\"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.065586 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.065642 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.065930 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r9hh\" (UniqueName: \"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.065981 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.120703 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.123604 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.152458 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.153286 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.176720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.210619 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-274fp\" (UniqueName: \"kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp\") pod \"heat-engine-5dc49b6cff-qkjws\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") " pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.216409 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" 
Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.216694 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.217070 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.217460 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r9hh\" (UniqueName: \"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.222725 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:45612->10.217.0.197:9292: read: connection reset by peer" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.223314 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.197:9292/healthcheck\": read tcp 10.217.0.2:45596->10.217.0.197:9292: read: connection reset by peer" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 
14:28:47.239935 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.245950 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.251489 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r9hh\" (UniqueName: \"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.303572 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data\") pod \"heat-api-5b9fd4bc48-dqzj8\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.324994 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.330618 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6ms\" (UniqueName: \"kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.331200 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.336832 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.332160 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.338324 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.338668 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-log" containerID="cri-o://8e0188370a74f1293b2acaf67ab6ee5b6ac3f308cf28a620efb702c7c15d44d9" gracePeriod=30 Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.338970 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-httpd" containerID="cri-o://b3fd04628ee0ea8357f8a3e4f567c32992da7103a255703b648fb7b0780cd02f" gracePeriod=30 Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.353347 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.402468 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.441862 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.441949 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6ms\" (UniqueName: \"kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.442055 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:47 crc kubenswrapper[4857]: I0318 14:28:47.442252 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.119053 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.177121 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.178222 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.180551 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6ms\" (UniqueName: \"kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms\") pod \"heat-cfnapi-7464859c55-9455r\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.260704 4857 generic.go:334] "Generic (PLEG): container finished" podID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerID="8e0188370a74f1293b2acaf67ab6ee5b6ac3f308cf28a620efb702c7c15d44d9" 
exitCode=143 Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.261082 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerDied","Data":"8e0188370a74f1293b2acaf67ab6ee5b6ac3f308cf28a620efb702c7c15d44d9"} Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.277067 4857 generic.go:334] "Generic (PLEG): container finished" podID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerID="fd3f3e4c72979e1311976bba0362715519392668a8948f3eeb9614f335b3f82c" exitCode=0 Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.277147 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerDied","Data":"fd3f3e4c72979e1311976bba0362715519392668a8948f3eeb9614f335b3f82c"} Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.300974 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.761163 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:48 crc kubenswrapper[4857]: I0318 14:28:48.762162 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-975859b47-gfk64" Mar 18 14:28:49 crc kubenswrapper[4857]: I0318 14:28:49.511952 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:28:49 crc kubenswrapper[4857]: I0318 14:28:49.648916 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:28:49 crc kubenswrapper[4857]: I0318 14:28:49.649440 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="dnsmasq-dns" containerID="cri-o://36d50ea2228dadb5ab7fc4856e2ed844aba4a550acf02b4adfd6b9f2f0a5ecd6" gracePeriod=10 Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.117101 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203149 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203256 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8qq4\" (UniqueName: \"kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203472 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203509 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203709 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.203966 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.204055 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.204083 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts\") pod \"335ff3aa-581f-4043-81e1-82e3c52d784b\" (UID: \"335ff3aa-581f-4043-81e1-82e3c52d784b\") " Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.204266 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs" (OuterVolumeSpecName: "logs") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.205043 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.207467 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.872359 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4" (OuterVolumeSpecName: "kube-api-access-l8qq4") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "kube-api-access-l8qq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.892945 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.213:5353: connect: connection refused" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.904692 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8qq4\" (UniqueName: \"kubernetes.io/projected/335ff3aa-581f-4043-81e1-82e3c52d784b-kube-api-access-l8qq4\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.904724 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/335ff3aa-581f-4043-81e1-82e3c52d784b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:50 crc kubenswrapper[4857]: I0318 14:28:50.954776 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts" (OuterVolumeSpecName: "scripts") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.011310 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.021918 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.043505 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a" (OuterVolumeSpecName: "glance") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.055163 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerStarted","Data":"9a830badb0172585287d5a74e792e12535d6738175fe6bb896183812eb56e9ca"} Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.055433 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-central-agent" containerID="cri-o://de936398dfad06d25c2900a725d41a3fe1236f429a4963f99fd02fd2821adfac" gracePeriod=30 Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.055819 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.056377 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="proxy-httpd" containerID="cri-o://9a830badb0172585287d5a74e792e12535d6738175fe6bb896183812eb56e9ca" gracePeriod=30 Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.056449 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="sg-core" containerID="cri-o://f64321db60225e30892564f68ebd7e8290f3adbd0654128e0b032d54359cd1c2" gracePeriod=30 Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.056502 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-notification-agent" containerID="cri-o://1db29361e9749fadfc0b932964ddce8d3e87453c9057773e06b7661b8e13fbe3" gracePeriod=30 Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.098348 4857 generic.go:334] 
"Generic (PLEG): container finished" podID="2128d65a-6594-4f94-89be-6a552d89bf98" containerID="36d50ea2228dadb5ab7fc4856e2ed844aba4a550acf02b4adfd6b9f2f0a5ecd6" exitCode=0 Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.101901 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" event={"ID":"2128d65a-6594-4f94-89be-6a552d89bf98","Type":"ContainerDied","Data":"36d50ea2228dadb5ab7fc4856e2ed844aba4a550acf02b4adfd6b9f2f0a5ecd6"} Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.112032 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.072751715 podStartE2EDuration="42.112006365s" podCreationTimestamp="2026-03-18 14:28:09 +0000 UTC" firstStartedPulling="2026-03-18 14:28:10.675888423 +0000 UTC m=+1674.805016880" lastFinishedPulling="2026-03-18 14:28:49.715143073 +0000 UTC m=+1713.844271530" observedRunningTime="2026-03-18 14:28:51.094616947 +0000 UTC m=+1715.223745414" watchObservedRunningTime="2026-03-18 14:28:51.112006365 +0000 UTC m=+1715.241134812" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.113705 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") on node \"crc\" " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.113740 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.128430 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data" (OuterVolumeSpecName: "config-data") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: 
"335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.131997 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "335ff3aa-581f-4043-81e1-82e3c52d784b" (UID: "335ff3aa-581f-4043-81e1-82e3c52d784b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.132454 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"335ff3aa-581f-4043-81e1-82e3c52d784b","Type":"ContainerDied","Data":"b5fe113423c9d1def3607cc18f5911206ef6a47e0a4799da03adff264933f291"} Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.132523 4857 scope.go:117] "RemoveContainer" containerID="fd3f3e4c72979e1311976bba0362715519392668a8948f3eeb9614f335b3f82c" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.132734 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.228793 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.228829 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/335ff3aa-581f-4043-81e1-82e3c52d784b-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.320762 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.320953 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a") on node "crc" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.338370 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.434972 4857 scope.go:117] "RemoveContainer" containerID="bb2ab01878d4c92536a05e4d4e4a0e5dd770a5abc36c462aa7656ddd28f9558b" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.475662 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.537718 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.564564 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.582627 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:51 crc kubenswrapper[4857]: E0318 14:28:51.586736 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-log" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.586848 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-log" Mar 18 14:28:51 crc kubenswrapper[4857]: E0318 14:28:51.587340 4857 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-httpd" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.587358 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-httpd" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.587815 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-log" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.587873 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" containerName="glance-httpd" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.590329 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.593171 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.594244 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.596392 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.603041 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.760169 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.760576 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.760683 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwcxl\" (UniqueName: \"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.760857 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.761088 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 
14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.761131 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0\") pod \"2128d65a-6594-4f94-89be-6a552d89bf98\" (UID: \"2128d65a-6594-4f94-89be-6a552d89bf98\") " Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.761695 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.761778 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-scripts\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.762024 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.762177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ts4s\" (UniqueName: \"kubernetes.io/projected/49d4f556-2bf9-4361-989b-e4d191f7fee4-kube-api-access-6ts4s\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 
14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.762332 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.762359 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-logs\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.762513 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-config-data\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.764457 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.812571 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.827012 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl" (OuterVolumeSpecName: "kube-api-access-cwcxl") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "kube-api-access-cwcxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.834802 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.903684 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.908609 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-logs\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.909006 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-config-data\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.911314 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.913006 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.914879 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-scripts\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.910302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-logs\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.917311 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.915098 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.928994 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/53a11717549d4e5fa20456445f0a3110867e942e65caa41580a09c0ef37f0f67/globalmount\"" pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.925767 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ts4s\" (UniqueName: \"kubernetes.io/projected/49d4f556-2bf9-4361-989b-e4d191f7fee4-kube-api-access-6ts4s\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.930957 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwcxl\" (UniqueName: \"kubernetes.io/projected/2128d65a-6594-4f94-89be-6a552d89bf98-kube-api-access-cwcxl\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.918203 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49d4f556-2bf9-4361-989b-e4d191f7fee4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.939573 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.940157 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-scripts\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.940845 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-config-data\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.941997 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49d4f556-2bf9-4361-989b-e4d191f7fee4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:51 crc kubenswrapper[4857]: I0318 14:28:51.958768 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ts4s\" (UniqueName: \"kubernetes.io/projected/49d4f556-2bf9-4361-989b-e4d191f7fee4-kube-api-access-6ts4s\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:52 crc kubenswrapper[4857]: W0318 14:28:52.023156 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1a34b9b_51dc_45e7_80a5_d7d2b27f4cde.slice/crio-a6614b24607e03938084cfa6f96f43e00952bee965e8e18cecc1e6037032b9a7 WatchSource:0}: Error finding container 
a6614b24607e03938084cfa6f96f43e00952bee965e8e18cecc1e6037032b9a7: Status 404 returned error can't find the container with id a6614b24607e03938084cfa6f96f43e00952bee965e8e18cecc1e6037032b9a7 Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.203061 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerStarted","Data":"f02dd999ffd96d9d3ad04894ee271a4e930c3af93096910ea6f0196039e7955f"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.560401 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"40f263c7-0bb2-473d-a658-41b6104343a9","Type":"ContainerStarted","Data":"709b0123a2c383eb99ebfc974a26c9e20a3c7659a7873329a36e1def4cc78455"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.579780 4857 generic.go:334] "Generic (PLEG): container finished" podID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerID="f64321db60225e30892564f68ebd7e8290f3adbd0654128e0b032d54359cd1c2" exitCode=2 Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.583279 4857 generic.go:334] "Generic (PLEG): container finished" podID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerID="1db29361e9749fadfc0b932964ddce8d3e87453c9057773e06b7661b8e13fbe3" exitCode=0 Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.583408 4857 generic.go:334] "Generic (PLEG): container finished" podID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerID="de936398dfad06d25c2900a725d41a3fe1236f429a4963f99fd02fd2821adfac" exitCode=0 Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.579948 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerDied","Data":"f64321db60225e30892564f68ebd7e8290f3adbd0654128e0b032d54359cd1c2"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.583689 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerDied","Data":"1db29361e9749fadfc0b932964ddce8d3e87453c9057773e06b7661b8e13fbe3"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.583791 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerDied","Data":"de936398dfad06d25c2900a725d41a3fe1236f429a4963f99fd02fd2821adfac"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.594150 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerStarted","Data":"1cf109cea8afa6eed50d10df458d1cb7724789061f4fc5ab6ec4a034c1d07e7d"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.594228 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8a87bd3-2d1f-4f9c-9c18-e3c8726fb04a\") pod \"glance-default-external-api-0\" (UID: \"49d4f556-2bf9-4361-989b-e4d191f7fee4\") " pod="openstack/glance-default-external-api-0" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.614490 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.614745 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-99884ddc-qwc56" event={"ID":"85d19c96-01eb-49d5-8240-825a53ed459d","Type":"ContainerStarted","Data":"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.615182 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.638693 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7795f8799-xsk4z" event={"ID":"afe9c8d8-6bd9-4958-b511-ddd797244400","Type":"ContainerStarted","Data":"f7028cc1a78e45b3d0410470c7432ef4f701f4a914da7726a0f49450036261ae"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.639235 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.639412 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.641688 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b7d67" podStartSLOduration=8.355546176 podStartE2EDuration="17.641666448s" podCreationTimestamp="2026-03-18 14:28:35 +0000 UTC" firstStartedPulling="2026-03-18 14:28:40.743871214 +0000 UTC m=+1704.872999661" lastFinishedPulling="2026-03-18 14:28:50.029991476 +0000 UTC m=+1714.159119933" observedRunningTime="2026-03-18 14:28:52.599390894 +0000 UTC m=+1716.728519351" watchObservedRunningTime="2026-03-18 14:28:52.641666448 +0000 UTC m=+1716.770794905" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.647195 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.26493488 podStartE2EDuration="40.647171747s" podCreationTimestamp="2026-03-18 14:28:12 +0000 UTC" firstStartedPulling="2026-03-18 14:28:13.628099244 +0000 UTC m=+1677.757227701" lastFinishedPulling="2026-03-18 14:28:50.010336111 +0000 UTC m=+1714.139464568" observedRunningTime="2026-03-18 14:28:52.638048407 +0000 UTC m=+1716.767176864" watchObservedRunningTime="2026-03-18 14:28:52.647171747 +0000 UTC m=+1716.776300194" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.672548 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dc49b6cff-qkjws" event={"ID":"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e","Type":"ContainerStarted","Data":"5e446f36887e2243a7f92967ac600a0b4350cb3b22879fcfd4137cc4723b27ec"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.675889 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.727117 4857 generic.go:334] "Generic (PLEG): container finished" podID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerID="b3fd04628ee0ea8357f8a3e4f567c32992da7103a255703b648fb7b0780cd02f" exitCode=0 Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.727278 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerDied","Data":"b3fd04628ee0ea8357f8a3e4f567c32992da7103a255703b648fb7b0780cd02f"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.741658 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.742121 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.742409 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.748864 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" event={"ID":"2128d65a-6594-4f94-89be-6a552d89bf98","Type":"ContainerDied","Data":"d78f3931be8a48e57fa7f1a7269d05f8b2dbc75522b59a098f70f6fc97accb25"} 
Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.748947 4857 scope.go:117] "RemoveContainer" containerID="36d50ea2228dadb5ab7fc4856e2ed844aba4a550acf02b4adfd6b9f2f0a5ecd6" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.749260 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jcn5v" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.759938 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerStarted","Data":"a6614b24607e03938084cfa6f96f43e00952bee965e8e18cecc1e6037032b9a7"} Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.772069 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config" (OuterVolumeSpecName: "config") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.789389 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2128d65a-6594-4f94-89be-6a552d89bf98" (UID: "2128d65a-6594-4f94-89be-6a552d89bf98"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.798136 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7795f8799-xsk4z" podStartSLOduration=10.041556625 podStartE2EDuration="19.796280189s" podCreationTimestamp="2026-03-18 14:28:33 +0000 UTC" firstStartedPulling="2026-03-18 14:28:39.816303602 +0000 UTC m=+1703.945432049" lastFinishedPulling="2026-03-18 14:28:49.571027156 +0000 UTC m=+1713.700155613" observedRunningTime="2026-03-18 14:28:52.788617756 +0000 UTC m=+1716.917746203" watchObservedRunningTime="2026-03-18 14:28:52.796280189 +0000 UTC m=+1716.925408646" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.840450 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.841202 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-99884ddc-qwc56" podStartSLOduration=10.556715918 podStartE2EDuration="19.841164518s" podCreationTimestamp="2026-03-18 14:28:33 +0000 UTC" firstStartedPulling="2026-03-18 14:28:40.320551081 +0000 UTC m=+1704.449679538" lastFinishedPulling="2026-03-18 14:28:49.604999681 +0000 UTC m=+1713.734128138" observedRunningTime="2026-03-18 14:28:52.756321223 +0000 UTC m=+1716.885449680" watchObservedRunningTime="2026-03-18 14:28:52.841164518 +0000 UTC m=+1716.970292975" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.852991 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.853014 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2128d65a-6594-4f94-89be-6a552d89bf98-config\") on node \"crc\" 
DevicePath \"\"" Mar 18 14:28:52 crc kubenswrapper[4857]: I0318 14:28:52.874915 4857 scope.go:117] "RemoveContainer" containerID="97a80ca89af75c970fec73c950de8e936c2041dfbb7b78f61bf66e3224352040" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.547600 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335ff3aa-581f-4043-81e1-82e3c52d784b" path="/var/lib/kubelet/pods/335ff3aa-581f-4043-81e1-82e3c52d784b/volumes" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.552611 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.552659 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jcn5v"] Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.795905 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.798250 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"67de56ed-3152-48fd-bd7e-be4d428e9d15","Type":"ContainerDied","Data":"6d2cf65e87d6ce173ea1860231b7077c759189d6c85f8f2181d8b7a364781e43"} Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.798447 4857 scope.go:117] "RemoveContainer" containerID="b3fd04628ee0ea8357f8a3e4f567c32992da7103a255703b648fb7b0780cd02f" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.883239 4857 scope.go:117] "RemoveContainer" containerID="95efc1e2da5135633bc35c5e3608d314bbca7554ba168575d360a4f598d51b5a" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951433 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhks8\" (UniqueName: \"kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: 
\"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951528 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951574 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951603 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951631 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951683 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951828 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.951867 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.956110 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.956995 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs" (OuterVolumeSpecName: "logs") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.968683 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8" (OuterVolumeSpecName: "kube-api-access-nhks8") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "kube-api-access-nhks8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:53 crc kubenswrapper[4857]: I0318 14:28:53.993947 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts" (OuterVolumeSpecName: "scripts") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.016363 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367" (OuterVolumeSpecName: "glance") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "pvc-a61b5137-25a0-4370-8b60-d456f1a37367". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.054978 4857 scope.go:117] "RemoveContainer" containerID="8e0188370a74f1293b2acaf67ab6ee5b6ac3f308cf28a620efb702c7c15d44d9" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.066903 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.071551 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") pod \"67de56ed-3152-48fd-bd7e-be4d428e9d15\" (UID: \"67de56ed-3152-48fd-bd7e-be4d428e9d15\") " Mar 18 14:28:54 crc kubenswrapper[4857]: W0318 14:28:54.072018 4857 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/67de56ed-3152-48fd-bd7e-be4d428e9d15/volumes/kubernetes.io~secret/combined-ca-bundle Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.072041 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130038 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130114 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") on node \"crc\" " Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130131 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhks8\" (UniqueName: \"kubernetes.io/projected/67de56ed-3152-48fd-bd7e-be4d428e9d15-kube-api-access-nhks8\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130143 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130152 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.130162 4857 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67de56ed-3152-48fd-bd7e-be4d428e9d15-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.163669 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: 
"67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.202720 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data" (OuterVolumeSpecName: "config-data") pod "67de56ed-3152-48fd-bd7e-be4d428e9d15" (UID: "67de56ed-3152-48fd-bd7e-be4d428e9d15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.205264 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.205528 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a61b5137-25a0-4370-8b60-d456f1a37367" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367") on node "crc" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.234577 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.234613 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.234625 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67de56ed-3152-48fd-bd7e-be4d428e9d15-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.693154 4857 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.790583 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.795644 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.798095 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bd6fd9d7b-xrcmg" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.903183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dc49b6cff-qkjws" event={"ID":"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e","Type":"ContainerStarted","Data":"f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670"} Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.914893 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.925295 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerStarted","Data":"7af0029b0a59f7ff5ce3f3899a4b15a1d497402aa467979362f8c5bc402dafaf"} Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.927236 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.952202 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49d4f556-2bf9-4361-989b-e4d191f7fee4","Type":"ContainerStarted","Data":"9b89b142742986b5cbfc25651a40085112fadb3280529df247851942c34b1566"} Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.957478 4857 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.957796 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-9554cfcb4-bkg8z" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-log" containerID="cri-o://252c4e58550cf5b3f7e8fd0b7d9b61587ca003ba59c100fc33b795c0b12d82b4" gracePeriod=30 Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.957910 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-9554cfcb4-bkg8z" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-api" containerID="cri-o://929df1992cf0fc69308df086ad805f057fc3e19380f1aa24b71078b3d89de0ac" gracePeriod=30 Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.968072 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"38f691fd-1071-4bdd-a29a-e0b7ae81432e","Type":"ContainerStarted","Data":"ff95f6a24759185539ef27a2f87e50c067503ca54d2436a40e6ed1b4ee25119c"} Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.970770 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.988356 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b9fd4bc48-dqzj8" podStartSLOduration=9.988336822 podStartE2EDuration="9.988336822s" podCreationTimestamp="2026-03-18 14:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:54.987568253 +0000 UTC m=+1719.116696700" watchObservedRunningTime="2026-03-18 14:28:54.988336822 +0000 UTC m=+1719.117465279" Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.998818 4857 generic.go:334] "Generic (PLEG): container finished" podID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" 
containerID="7b9bcb3defd40405c4b5517c5dcad6ef8e614e6373c37e96148b6cd35a260a54" exitCode=1 Mar 18 14:28:54 crc kubenswrapper[4857]: I0318 14:28:54.999226 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerDied","Data":"7b9bcb3defd40405c4b5517c5dcad6ef8e614e6373c37e96148b6cd35a260a54"} Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.000106 4857 scope.go:117] "RemoveContainer" containerID="7b9bcb3defd40405c4b5517c5dcad6ef8e614e6373c37e96148b6cd35a260a54" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.022510 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.023878 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5dc49b6cff-qkjws" podStartSLOduration=10.023855376 podStartE2EDuration="10.023855376s" podCreationTimestamp="2026-03-18 14:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:54.964393999 +0000 UTC m=+1719.093522456" watchObservedRunningTime="2026-03-18 14:28:55.023855376 +0000 UTC m=+1719.152983833" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.043990 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=16.043967112 podStartE2EDuration="16.043967112s" podCreationTimestamp="2026-03-18 14:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:28:55.035593211 +0000 UTC m=+1719.164721668" watchObservedRunningTime="2026-03-18 14:28:55.043967112 +0000 UTC m=+1719.173095569" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.166839 4857 scope.go:117] "RemoveContainer" 
containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:28:55 crc kubenswrapper[4857]: E0318 14:28:55.167425 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.238161 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" path="/var/lib/kubelet/pods/2128d65a-6594-4f94-89be-6a552d89bf98/volumes" Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.243191 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:55 crc kubenswrapper[4857]: I0318 14:28:55.990572 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:55.997279 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:55.998146 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.050451 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:56 crc kubenswrapper[4857]: E0318 14:28:56.070640 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-log" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.071002 4857 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-log" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.071634 4857 scope.go:117] "RemoveContainer" containerID="7af0029b0a59f7ff5ce3f3899a4b15a1d497402aa467979362f8c5bc402dafaf" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.070687 4857 generic.go:334] "Generic (PLEG): container finished" podID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerID="7af0029b0a59f7ff5ce3f3899a4b15a1d497402aa467979362f8c5bc402dafaf" exitCode=1 Mar 18 14:28:56 crc kubenswrapper[4857]: E0318 14:28:56.086200 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="dnsmasq-dns" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.086428 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="dnsmasq-dns" Mar 18 14:28:56 crc kubenswrapper[4857]: E0318 14:28:56.086633 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-httpd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.086698 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-httpd" Mar 18 14:28:56 crc kubenswrapper[4857]: E0318 14:28:56.086868 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="init" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.086943 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="init" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.087544 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2128d65a-6594-4f94-89be-6a552d89bf98" containerName="dnsmasq-dns" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.098169 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-httpd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.099119 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" containerName="glance-log" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.102339 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.115475 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.115651 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.101279 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.126878 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerDied","Data":"7af0029b0a59f7ff5ce3f3899a4b15a1d497402aa467979362f8c5bc402dafaf"} Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.147115 4857 generic.go:334] "Generic (PLEG): container finished" podID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerID="252c4e58550cf5b3f7e8fd0b7d9b61587ca003ba59c100fc33b795c0b12d82b4" exitCode=143 Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.147240 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerDied","Data":"252c4e58550cf5b3f7e8fd0b7d9b61587ca003ba59c100fc33b795c0b12d82b4"} Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.181972 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.182286 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7795f8799-xsk4z" podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerName="heat-cfnapi" containerID="cri-o://f7028cc1a78e45b3d0410470c7432ef4f701f4a914da7726a0f49450036261ae" gracePeriod=60 Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230255 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230353 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsdxx\" (UniqueName: \"kubernetes.io/projected/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-kube-api-access-dsdxx\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230382 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230500 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230526 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230764 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.230790 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-logs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.256962 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.257916 4857 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/heat-api-99884ddc-qwc56" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" containerName="heat-api" containerID="cri-o://36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11" gracePeriod=60 Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.283008 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.284935 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.288977 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.294948 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.301952 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.332884 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.334919 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336372 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336500 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336654 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsdxx\" (UniqueName: \"kubernetes.io/projected/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-kube-api-access-dsdxx\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336712 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336907 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " 
pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.336956 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.337745 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.338072 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.340442 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.346171 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.346235 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-logs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.355933 
4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.356456 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce4ba88565bbf6eaffcfc19803bd3d9355ffb3d5f28210b890fc7555a4578986/globalmount\"" pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.356240 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-logs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.356839 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.357022 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.357211 4857 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.361780 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsdxx\" (UniqueName: \"kubernetes.io/projected/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-kube-api-access-dsdxx\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.369440 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cc31bb4-6fa0-41fe-b292-9a9de2d9a581-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.374030 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"] Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.892582 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.892899 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 
14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893061 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gw2t\" (UniqueName: \"kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893161 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dptgf\" (UniqueName: \"kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893256 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893349 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893501 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " 
pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893671 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.893817 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.925471 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.925790 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:56 crc kubenswrapper[4857]: I0318 14:28:56.925826 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: 
\"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.012568 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a61b5137-25a0-4370-8b60-d456f1a37367\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a61b5137-25a0-4370-8b60-d456f1a37367\") pod \"glance-default-internal-api-0\" (UID: \"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581\") " pod="openstack/glance-default-internal-api-0" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029700 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029778 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029863 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gw2t\" (UniqueName: \"kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029892 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptgf\" (UniqueName: \"kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf\") pod \"heat-cfnapi-6756b6568c-jbstd\" 
(UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029933 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.029968 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030057 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030185 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030216 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " 
pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030298 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030351 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.030379 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.044080 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.045012 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc 
kubenswrapper[4857]: I0318 14:28:57.046573 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.047613 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.047824 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.048381 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.057151 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.061878 4857 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.065309 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.065905 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gw2t\" (UniqueName: \"kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.067708 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs\") pod \"heat-api-6ff5fc6d6f-phz9q\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") " pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.080619 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptgf\" (UniqueName: \"kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf\") pod \"heat-cfnapi-6756b6568c-jbstd\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") " pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.107726 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7795f8799-xsk4z" 
podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.224:8000/healthcheck\": read tcp 10.217.0.2:56128->10.217.0.224:8000: read: connection reset by peer" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.112674 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-99884ddc-qwc56" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.226:8004/healthcheck\": read tcp 10.217.0.2:37936->10.217.0.226:8004: read: connection reset by peer" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.150479 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.164987 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b7d67" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="registry-server" probeResult="failure" output=< Mar 18 14:28:57 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:28:57 crc kubenswrapper[4857]: > Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.192572 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67de56ed-3152-48fd-bd7e-be4d428e9d15" path="/var/lib/kubelet/pods/67de56ed-3152-48fd-bd7e-be4d428e9d15/volumes" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.230221 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.305857 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:28:57 crc kubenswrapper[4857]: I0318 14:28:57.403392 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:28:57 crc kubenswrapper[4857]: E0318 14:28:57.688422 4857 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.052452 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.072367 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcnbk\" (UniqueName: \"kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk\") pod \"85d19c96-01eb-49d5-8240-825a53ed459d\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.072558 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data\") pod \"85d19c96-01eb-49d5-8240-825a53ed459d\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.072890 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle\") pod \"85d19c96-01eb-49d5-8240-825a53ed459d\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.073036 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom\") pod 
\"85d19c96-01eb-49d5-8240-825a53ed459d\" (UID: \"85d19c96-01eb-49d5-8240-825a53ed459d\") " Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.083776 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85d19c96-01eb-49d5-8240-825a53ed459d" (UID: "85d19c96-01eb-49d5-8240-825a53ed459d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.090174 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk" (OuterVolumeSpecName: "kube-api-access-mcnbk") pod "85d19c96-01eb-49d5-8240-825a53ed459d" (UID: "85d19c96-01eb-49d5-8240-825a53ed459d"). InnerVolumeSpecName "kube-api-access-mcnbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.131961 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85d19c96-01eb-49d5-8240-825a53ed459d" (UID: "85d19c96-01eb-49d5-8240-825a53ed459d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.169929 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data" (OuterVolumeSpecName: "config-data") pod "85d19c96-01eb-49d5-8240-825a53ed459d" (UID: "85d19c96-01eb-49d5-8240-825a53ed459d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.176869 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.176915 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.176930 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcnbk\" (UniqueName: \"kubernetes.io/projected/85d19c96-01eb-49d5-8240-825a53ed459d-kube-api-access-mcnbk\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.176947 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85d19c96-01eb-49d5-8240-825a53ed459d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.187232 4857 generic.go:334] "Generic (PLEG): container finished" podID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerID="f7028cc1a78e45b3d0410470c7432ef4f701f4a914da7726a0f49450036261ae" exitCode=0 Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.187312 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7795f8799-xsk4z" event={"ID":"afe9c8d8-6bd9-4958-b511-ddd797244400","Type":"ContainerDied","Data":"f7028cc1a78e45b3d0410470c7432ef4f701f4a914da7726a0f49450036261ae"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.213668 4857 generic.go:334] "Generic (PLEG): container finished" podID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" exitCode=1 Mar 18 14:28:58 crc 
kubenswrapper[4857]: I0318 14:28:58.213879 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerDied","Data":"5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.213991 4857 scope.go:117] "RemoveContainer" containerID="7b9bcb3defd40405c4b5517c5dcad6ef8e614e6373c37e96148b6cd35a260a54" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.215534 4857 scope.go:117] "RemoveContainer" containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" Mar 18 14:28:58 crc kubenswrapper[4857]: E0318 14:28:58.216000 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7464859c55-9455r_openstack(d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde)\"" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.232699 4857 generic.go:334] "Generic (PLEG): container finished" podID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" exitCode=1 Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.232958 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerDied","Data":"1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.234350 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:28:58 crc kubenswrapper[4857]: E0318 14:28:58.235691 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5b9fd4bc48-dqzj8_openstack(dc868497-4c53-4e0b-9e06-c1b55bb777d2)\"" pod="openstack/heat-api-5b9fd4bc48-dqzj8" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.247095 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49d4f556-2bf9-4361-989b-e4d191f7fee4","Type":"ContainerStarted","Data":"7109f7c7ef0ed517501eea3ca51940529297832bf1866aebe7a86bc180fd8c07"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.258139 4857 generic.go:334] "Generic (PLEG): container finished" podID="85d19c96-01eb-49d5-8240-825a53ed459d" containerID="36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11" exitCode=0 Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.258217 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-99884ddc-qwc56" event={"ID":"85d19c96-01eb-49d5-8240-825a53ed459d","Type":"ContainerDied","Data":"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.258257 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-99884ddc-qwc56" event={"ID":"85d19c96-01eb-49d5-8240-825a53ed459d","Type":"ContainerDied","Data":"d38df2c985d409ae130538fdfe3feac05deb4c99ad3273b61c244f6ac2f558d5"} Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.258301 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-99884ddc-qwc56" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.302464 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.302691 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.312321 4857 scope.go:117] "RemoveContainer" containerID="7af0029b0a59f7ff5ce3f3899a4b15a1d497402aa467979362f8c5bc402dafaf" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.407623 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.435935 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-99884ddc-qwc56"] Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.483350 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"] Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.518946 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.850475 4857 scope.go:117] "RemoveContainer" containerID="36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11" Mar 18 14:28:58 crc kubenswrapper[4857]: I0318 14:28:58.912601 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"] Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.169284 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.213552 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" path="/var/lib/kubelet/pods/85d19c96-01eb-49d5-8240-825a53ed459d/volumes" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.280425 4857 scope.go:117] "RemoveContainer" containerID="36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11" Mar 18 14:28:59 crc kubenswrapper[4857]: E0318 14:28:59.285674 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11\": container with ID starting with 36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11 not found: ID does not exist" containerID="36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.285779 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11"} err="failed to get container status \"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11\": rpc error: code = NotFound desc = could not find container \"36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11\": container with ID starting with 36b9caa6eda5c0dae13607299fb314a1fae3f156c7ecb904ea6c9b5bcd981a11 not found: ID does not exist" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.311106 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:28:59 crc kubenswrapper[4857]: E0318 14:28:59.311572 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-5b9fd4bc48-dqzj8_openstack(dc868497-4c53-4e0b-9e06-c1b55bb777d2)\"" pod="openstack/heat-api-5b9fd4bc48-dqzj8" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.318813 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff5fc6d6f-phz9q" event={"ID":"f8bffa05-4039-4fa4-b173-8fc1cfa492c9","Type":"ContainerStarted","Data":"edbd22bf2e47b5491c9c259fa1c102b1dc9f3b2ae9c4de7e45b41205587a9437"} Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.337129 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6756b6568c-jbstd" event={"ID":"b715c731-2351-42c5-9f06-d99258f15771","Type":"ContainerStarted","Data":"46978f420aa1faf39b758c9dbdfd0b58c7039c58983405184f784efed02fb5ae"} Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.348392 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data\") pod \"afe9c8d8-6bd9-4958-b511-ddd797244400\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.348670 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom\") pod \"afe9c8d8-6bd9-4958-b511-ddd797244400\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.348726 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle\") pod \"afe9c8d8-6bd9-4958-b511-ddd797244400\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.348798 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-xwcrs\" (UniqueName: \"kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs\") pod \"afe9c8d8-6bd9-4958-b511-ddd797244400\" (UID: \"afe9c8d8-6bd9-4958-b511-ddd797244400\") " Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.378501 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs" (OuterVolumeSpecName: "kube-api-access-xwcrs") pod "afe9c8d8-6bd9-4958-b511-ddd797244400" (UID: "afe9c8d8-6bd9-4958-b511-ddd797244400"). InnerVolumeSpecName "kube-api-access-xwcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.391004 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "afe9c8d8-6bd9-4958-b511-ddd797244400" (UID: "afe9c8d8-6bd9-4958-b511-ddd797244400"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.441076 4857 generic.go:334] "Generic (PLEG): container finished" podID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerID="929df1992cf0fc69308df086ad805f057fc3e19380f1aa24b71078b3d89de0ac" exitCode=0 Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.441189 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerDied","Data":"929df1992cf0fc69308df086ad805f057fc3e19380f1aa24b71078b3d89de0ac"} Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.462764 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.479735 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581","Type":"ContainerStarted","Data":"fa338592b000118d2220dce439f09f3105ea1dcc16cd9e61de8d5e380de04d82"} Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.502799 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwcrs\" (UniqueName: \"kubernetes.io/projected/afe9c8d8-6bd9-4958-b511-ddd797244400-kube-api-access-xwcrs\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.860959 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7795f8799-xsk4z" event={"ID":"afe9c8d8-6bd9-4958-b511-ddd797244400","Type":"ContainerDied","Data":"ca70e3f818bd9114875e1fe541eabc4a2c1b94e73f38e2f54625874533325a46"} Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.861035 4857 scope.go:117] "RemoveContainer" containerID="f7028cc1a78e45b3d0410470c7432ef4f701f4a914da7726a0f49450036261ae" Mar 18 14:28:59 crc 
kubenswrapper[4857]: I0318 14:28:59.861248 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7795f8799-xsk4z" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.867240 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe9c8d8-6bd9-4958-b511-ddd797244400" (UID: "afe9c8d8-6bd9-4958-b511-ddd797244400"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.867936 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.956038 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data" (OuterVolumeSpecName: "config-data") pod "afe9c8d8-6bd9-4958-b511-ddd797244400" (UID: "afe9c8d8-6bd9-4958-b511-ddd797244400"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:28:59 crc kubenswrapper[4857]: I0318 14:28:59.986095 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe9c8d8-6bd9-4958-b511-ddd797244400-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.007785 4857 scope.go:117] "RemoveContainer" containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" Mar 18 14:29:00 crc kubenswrapper[4857]: E0318 14:29:00.012088 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7464859c55-9455r_openstack(d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde)\"" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.074272 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087319 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087598 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087630 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087679 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77nbf\" (UniqueName: \"kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087699 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087775 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.087799 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs\") pod \"829a18fa-de4c-47b1-b774-d8a43b8b085d\" (UID: \"829a18fa-de4c-47b1-b774-d8a43b8b085d\") " Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.089669 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs" (OuterVolumeSpecName: "logs") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.092186 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts" (OuterVolumeSpecName: "scripts") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.094713 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf" (OuterVolumeSpecName: "kube-api-access-77nbf") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "kube-api-access-77nbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.197057 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.201306 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77nbf\" (UniqueName: \"kubernetes.io/projected/829a18fa-de4c-47b1-b774-d8a43b8b085d-kube-api-access-77nbf\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.204524 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/829a18fa-de4c-47b1-b774-d8a43b8b085d-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.243991 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.256995 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7795f8799-xsk4z"] Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.561218 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.602202 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data" (OuterVolumeSpecName: "config-data") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.615783 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.615816 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.657924 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.688975 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "829a18fa-de4c-47b1-b774-d8a43b8b085d" (UID: "829a18fa-de4c-47b1-b774-d8a43b8b085d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.717780 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:00 crc kubenswrapper[4857]: I0318 14:29:00.717816 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/829a18fa-de4c-47b1-b774-d8a43b8b085d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.016921 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9554cfcb4-bkg8z" event={"ID":"829a18fa-de4c-47b1-b774-d8a43b8b085d","Type":"ContainerDied","Data":"68dbe674c6b0b1e8c1572f8e0ff6f512c7ad0c761b736e13afb9f077b55ea666"} Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.016981 4857 scope.go:117] "RemoveContainer" containerID="929df1992cf0fc69308df086ad805f057fc3e19380f1aa24b71078b3d89de0ac" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.017211 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9554cfcb4-bkg8z" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.039485 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff5fc6d6f-phz9q" event={"ID":"f8bffa05-4039-4fa4-b173-8fc1cfa492c9","Type":"ContainerStarted","Data":"e7b280aa31ad8500d2907451d0a9345096499dc0024e4a8ff967cecc55c8fd9c"} Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.040331 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.045293 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6756b6568c-jbstd" event={"ID":"b715c731-2351-42c5-9f06-d99258f15771","Type":"ContainerStarted","Data":"6d3879d227cb7eee1fa08208d8c66b82651f3cda020e3e56abf8ef13c9b4c261"} Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.046837 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.052408 4857 scope.go:117] "RemoveContainer" containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.052685 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"49d4f556-2bf9-4361-989b-e4d191f7fee4","Type":"ContainerStarted","Data":"5ba47ea82d448eda1d0b9077fe6be39bfabf51c3899a4a65100d18902dabdb08"} Mar 18 14:29:01 crc kubenswrapper[4857]: E0318 14:29:01.052989 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7464859c55-9455r_openstack(d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde)\"" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" Mar 18 14:29:01 crc 
kubenswrapper[4857]: I0318 14:29:01.082618 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podStartSLOduration=5.082595612 podStartE2EDuration="5.082595612s" podCreationTimestamp="2026-03-18 14:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:01.070843476 +0000 UTC m=+1725.199971933" watchObservedRunningTime="2026-03-18 14:29:01.082595612 +0000 UTC m=+1725.211724069" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.108413 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.108393721 podStartE2EDuration="10.108393721s" podCreationTimestamp="2026-03-18 14:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:01.106659197 +0000 UTC m=+1725.235787654" watchObservedRunningTime="2026-03-18 14:29:01.108393721 +0000 UTC m=+1725.237522178" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.122959 4857 scope.go:117] "RemoveContainer" containerID="252c4e58550cf5b3f7e8fd0b7d9b61587ca003ba59c100fc33b795c0b12d82b4" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.147769 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podStartSLOduration=5.147738191 podStartE2EDuration="5.147738191s" podCreationTimestamp="2026-03-18 14:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:01.144286914 +0000 UTC m=+1725.273415371" watchObservedRunningTime="2026-03-18 14:29:01.147738191 +0000 UTC m=+1725.276866638" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.192253 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" path="/var/lib/kubelet/pods/afe9c8d8-6bd9-4958-b511-ddd797244400/volumes" Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.207790 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:29:01 crc kubenswrapper[4857]: I0318 14:29:01.222205 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9554cfcb4-bkg8z"] Mar 18 14:29:02 crc kubenswrapper[4857]: I0318 14:29:02.066598 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581","Type":"ContainerStarted","Data":"d931459071d03cfef7d800f4e3737e6fac701b3eada1bf7f2db6c514e9e7d183"} Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:02.596898 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:02.597800 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.007293 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.007331 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.022482 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:29:03 crc kubenswrapper[4857]: E0318 14:29:03.022915 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5b9fd4bc48-dqzj8_openstack(dc868497-4c53-4e0b-9e06-c1b55bb777d2)\"" 
pod="openstack/heat-api-5b9fd4bc48-dqzj8" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.098688 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.131444 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4cc31bb4-6fa0-41fe-b292-9a9de2d9a581","Type":"ContainerStarted","Data":"2e4d4328a4bb9969d2872c664cd83c97d70cd7e25db7142586f4a3e3498452cf"} Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.133606 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.140550 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.141359 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 18 14:29:03 crc kubenswrapper[4857]: E0318 14:29:03.145071 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5b9fd4bc48-dqzj8_openstack(dc868497-4c53-4e0b-9e06-c1b55bb777d2)\"" pod="openstack/heat-api-5b9fd4bc48-dqzj8" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.164733 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.164707287 podStartE2EDuration="8.164707287s" podCreationTimestamp="2026-03-18 14:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:03.161567828 
+0000 UTC m=+1727.290696285" watchObservedRunningTime="2026-03-18 14:29:03.164707287 +0000 UTC m=+1727.293835744" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.208527 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" path="/var/lib/kubelet/pods/829a18fa-de4c-47b1-b774-d8a43b8b085d/volumes" Mar 18 14:29:03 crc kubenswrapper[4857]: I0318 14:29:03.230129 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="38f691fd-1071-4bdd-a29a-e0b7ae81432e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:04 crc kubenswrapper[4857]: I0318 14:29:04.176545 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 18 14:29:05 crc kubenswrapper[4857]: I0318 14:29:05.212987 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="38f691fd-1071-4bdd-a29a-e0b7ae81432e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.636077 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.656323 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.742343 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2bj5j"] Mar 18 14:29:06 crc kubenswrapper[4857]: E0318 14:29:06.747432 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-log" Mar 18 
14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.747582 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-log" Mar 18 14:29:06 crc kubenswrapper[4857]: E0318 14:29:06.747671 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerName="heat-cfnapi" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.747736 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerName="heat-cfnapi" Mar 18 14:29:06 crc kubenswrapper[4857]: E0318 14:29:06.747920 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" containerName="heat-api" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.747994 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" containerName="heat-api" Mar 18 14:29:06 crc kubenswrapper[4857]: E0318 14:29:06.748078 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-api" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.748144 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-api" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.749177 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d19c96-01eb-49d5-8240-825a53ed459d" containerName="heat-api" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.749281 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-log" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.749369 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="829a18fa-de4c-47b1-b774-d8a43b8b085d" containerName="placement-api" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 
14:29:06.749495 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe9c8d8-6bd9-4958-b511-ddd797244400" containerName="heat-cfnapi" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.750958 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.769105 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2bj5j"] Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.854896 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.866422 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9pc5\" (UniqueName: \"kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.866616 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.870481 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dt4sv"] Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.872838 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.892819 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dd68-account-create-update-mlwzf"] Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.895198 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.900203 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.921787 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dt4sv"] Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.943512 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dd68-account-create-update-mlwzf"] Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.968885 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.969014 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpr7w\" (UniqueName: \"kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.969118 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9pc5\" (UniqueName: 
\"kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.969166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bhb\" (UniqueName: \"kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.969211 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.969245 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:06 crc kubenswrapper[4857]: I0318 14:29:06.983572 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.022438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9pc5\" 
(UniqueName: \"kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5\") pod \"nova-api-db-create-2bj5j\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.047563 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z52bw"] Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.050015 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072193 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072254 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntkln\" (UniqueName: \"kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072336 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpr7w\" (UniqueName: \"kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bhb\" (UniqueName: 
\"kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072550 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.072688 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.073321 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.073999 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.093802 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z52bw"] 
Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.107689 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bhb\" (UniqueName: \"kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb\") pod \"nova-cell0-db-create-dt4sv\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.118471 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpr7w\" (UniqueName: \"kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w\") pod \"nova-api-dd68-account-create-update-mlwzf\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:07 crc kubenswrapper[4857]: I0318 14:29:07.121323 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-056e-account-create-update-lsd7h"] Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.028965 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.057486 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.060062 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.060685 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.068781 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntkln\" (UniqueName: \"kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.109403 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntkln\" (UniqueName: \"kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.140731 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.145211 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.145248 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-056e-account-create-update-lsd7h"] Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.145270 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.145285 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cdc0-account-create-update-9lftd"] Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.147342 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 
14:29:08.147494 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.148259 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.150576 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 18 14:29:08 crc kubenswrapper[4857]: E0318 14:29:08.171484 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.178181 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cdc0-account-create-update-9lftd"] Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.181773 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.192347 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts\") pod \"nova-cell1-db-create-z52bw\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.198585 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.215963 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.238690 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="38f691fd-1071-4bdd-a29a-e0b7ae81432e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.239620 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.582378 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.585383 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.585566 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8htgk\" (UniqueName: \"kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.585652 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh8sk\" (UniqueName: \"kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.585767 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.697892 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.698449 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.698650 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8htgk\" (UniqueName: \"kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" 
(UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.698825 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh8sk\" (UniqueName: \"kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:08 crc kubenswrapper[4857]: I0318 14:29:08.698896 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:08.724044 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.152416 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.152800 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-8dbd8fb56-f2qm7" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" containerID="cri-o://78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" gracePeriod=60 Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.170475 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8htgk\" (UniqueName: \"kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk\") pod \"nova-cell1-cdc0-account-create-update-9lftd\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.173197 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh8sk\" (UniqueName: \"kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk\") pod \"nova-cell0-056e-account-create-update-lsd7h\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.230646 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b7d67" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="registry-server" containerID="cri-o://f02dd999ffd96d9d3ad04894ee271a4e930c3af93096910ea6f0196039e7955f" gracePeriod=2 Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.231908 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.231937 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.436256 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.450814 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.818529 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2bj5j"] Mar 18 14:29:09 crc kubenswrapper[4857]: I0318 14:29:09.950137 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dt4sv"] Mar 18 14:29:09 crc kubenswrapper[4857]: W0318 14:29:09.958104 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod414737ac_b39d_4b54_bd95_2c8448fd22dc.slice/crio-f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b WatchSource:0}: Error finding container f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b: Status 404 returned error can't find the container with id f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b Mar 18 14:29:10 crc kubenswrapper[4857]: I0318 14:29:10.063650 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:29:10 crc kubenswrapper[4857]: I0318 14:29:10.220713 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="38f691fd-1071-4bdd-a29a-e0b7ae81432e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.228:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:10 crc kubenswrapper[4857]: I0318 14:29:10.243787 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z52bw"] Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.298666 4857 generic.go:334] "Generic (PLEG): container finished" podID="b34cc331-1dee-4d42-8824-d91dbf40e144" 
containerID="f02dd999ffd96d9d3ad04894ee271a4e930c3af93096910ea6f0196039e7955f" exitCode=0 Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.340222 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2bj5j" event={"ID":"51ad3374-1103-4d14-a250-0efcbc82abf8","Type":"ContainerStarted","Data":"fd271d802bdc7e10c241dbbeeb10b98af5de5f9bed214bab4c2e4514a992b5ff"} Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.340260 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dt4sv" event={"ID":"414737ac-b39d-4b54-bd95-2c8448fd22dc","Type":"ContainerStarted","Data":"f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b"} Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.340273 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerDied","Data":"f02dd999ffd96d9d3ad04894ee271a4e930c3af93096910ea6f0196039e7955f"} Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.340319 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 18 14:29:11 crc kubenswrapper[4857]: I0318 14:29:11.340331 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z52bw" event={"ID":"25041d81-4986-400d-b6bc-eab23db1550f","Type":"ContainerStarted","Data":"978215fe584d842d45447162c4b81e3c21d23f921ff625f56c6b6a29f80fb799"} Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.033442 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-056e-account-create-update-lsd7h"] Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.141033 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dd68-account-create-update-mlwzf"] Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.164574 4857 scope.go:117] "RemoveContainer" 
containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.177405 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cdc0-account-create-update-9lftd"] Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.260359 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.352001 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dd68-account-create-update-mlwzf" event={"ID":"e4c1ef3d-cd3d-405d-a484-af6052f4a291","Type":"ContainerStarted","Data":"4592ce0f34be34807d5593c749734a63fb06a102e07c1e8708ff4571f6c57664"} Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.356829 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2bj5j" event={"ID":"51ad3374-1103-4d14-a250-0efcbc82abf8","Type":"ContainerStarted","Data":"f0611cf9711a6192a725046c208a601db4a23533488b88f34572aee00e808023"} Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.374308 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b7d67" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.374316 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b7d67" event={"ID":"b34cc331-1dee-4d42-8824-d91dbf40e144","Type":"ContainerDied","Data":"bdcdc30d20cb25670ef308e8baaf25d66fcc338726a0df22f1a94041b04dfc7b"} Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.374381 4857 scope.go:117] "RemoveContainer" containerID="f02dd999ffd96d9d3ad04894ee271a4e930c3af93096910ea6f0196039e7955f" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.382153 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.382186 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.383501 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" event={"ID":"4c281c9a-b573-4e11-acfa-f15205eb5f58","Type":"ContainerStarted","Data":"c25ffe9d862d777bea095b1159c4a2d8cd3482c861cd734686eea63e2451f024"} Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.418781 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content\") pod \"b34cc331-1dee-4d42-8824-d91dbf40e144\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.418882 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vj68\" (UniqueName: \"kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68\") pod \"b34cc331-1dee-4d42-8824-d91dbf40e144\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.418997 4857 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities\") pod \"b34cc331-1dee-4d42-8824-d91dbf40e144\" (UID: \"b34cc331-1dee-4d42-8824-d91dbf40e144\") " Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.420162 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities" (OuterVolumeSpecName: "utilities") pod "b34cc331-1dee-4d42-8824-d91dbf40e144" (UID: "b34cc331-1dee-4d42-8824-d91dbf40e144"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.434031 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68" (OuterVolumeSpecName: "kube-api-access-4vj68") pod "b34cc331-1dee-4d42-8824-d91dbf40e144" (UID: "b34cc331-1dee-4d42-8824-d91dbf40e144"). InnerVolumeSpecName "kube-api-access-4vj68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.450976 4857 scope.go:117] "RemoveContainer" containerID="8f6cce8905926ddebdca984fc7f18b005d716f8ba879848bc6d0080ec86bd7d0" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.483099 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b34cc331-1dee-4d42-8824-d91dbf40e144" (UID: "b34cc331-1dee-4d42-8824-d91dbf40e144"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.534329 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.534379 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b34cc331-1dee-4d42-8824-d91dbf40e144-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:12 crc kubenswrapper[4857]: I0318 14:29:12.534404 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vj68\" (UniqueName: \"kubernetes.io/projected/b34cc331-1dee-4d42-8824-d91dbf40e144-kube-api-access-4vj68\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.100881 4857 scope.go:117] "RemoveContainer" containerID="0682e5a05e1ca3d0f46a367825606938d9c73f4bc531e1e63c9ce86d2cfd9bc2" Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.226494 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.235556 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b7d67"] Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.395032 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" event={"ID":"c09c7b6e-3108-4aab-8597-20e2f835cb63","Type":"ContainerStarted","Data":"055405a7694e90e6d0711df33d556611b9d475199e7fd25c07364efd602813c2"} Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.397298 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dt4sv" 
event={"ID":"414737ac-b39d-4b54-bd95-2c8448fd22dc","Type":"ContainerStarted","Data":"be924bdfc117537af15108773f109c8ad3095151d27b4d0f9524ac83e25be840"} Mar 18 14:29:13 crc kubenswrapper[4857]: I0318 14:29:13.401615 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z52bw" event={"ID":"25041d81-4986-400d-b6bc-eab23db1550f","Type":"ContainerStarted","Data":"4b41effda7dc3e901c036a53169496b4790ec2243a7e696aee78a02dd9a6bc97"} Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.166278 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:29:14 crc kubenswrapper[4857]: E0318 14:29:14.263673 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:14 crc kubenswrapper[4857]: E0318 14:29:14.265304 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:14 crc kubenswrapper[4857]: E0318 14:29:14.266589 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:14 crc kubenswrapper[4857]: E0318 14:29:14.266643 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8dbd8fb56-f2qm7" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.487192 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" event={"ID":"4c281c9a-b573-4e11-acfa-f15205eb5f58","Type":"ContainerStarted","Data":"149f1e15e3daf3f8f14ff6ddc3ec387b2e26a80b899edeed9016e44af707ee88"} Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.502439 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dd68-account-create-update-mlwzf" event={"ID":"e4c1ef3d-cd3d-405d-a484-af6052f4a291","Type":"ContainerStarted","Data":"808aa0068c3f7277556505fb5d649cf50f221c456cd38a542096ae3a19529c92"} Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.507240 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerStarted","Data":"079d2d27256a1caf76fbcb5ed141e88a671177d3a1a4b62c697ccff076b1989e"} Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.508702 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.510801 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" event={"ID":"c09c7b6e-3108-4aab-8597-20e2f835cb63","Type":"ContainerStarted","Data":"6218698e97e32d7301aaaf93042d3693fece60bfe01bbc0a99f2f988998c89ac"} Mar 18 14:29:14 crc kubenswrapper[4857]: I0318 14:29:14.925215 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" podStartSLOduration=7.925171096 podStartE2EDuration="7.925171096s" podCreationTimestamp="2026-03-18 14:29:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:14.915979504 +0000 UTC m=+1739.045107971" watchObservedRunningTime="2026-03-18 14:29:14.925171096 +0000 UTC m=+1739.054299553" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.573765 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-2bj5j" podStartSLOduration=9.573727247 podStartE2EDuration="9.573727247s" podCreationTimestamp="2026-03-18 14:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.543762983 +0000 UTC m=+1739.672891440" watchObservedRunningTime="2026-03-18 14:29:15.573727247 +0000 UTC m=+1739.702855704" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.646309 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" path="/var/lib/kubelet/pods/b34cc331-1dee-4d42-8824-d91dbf40e144/volumes" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.669615 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-dd68-account-create-update-mlwzf" podStartSLOduration=9.669583959 podStartE2EDuration="9.669583959s" podCreationTimestamp="2026-03-18 14:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.588857987 +0000 UTC m=+1739.717986444" watchObservedRunningTime="2026-03-18 14:29:15.669583959 +0000 UTC m=+1739.798712416" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.705551 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dt4sv" podStartSLOduration=9.705521563 podStartE2EDuration="9.705521563s" podCreationTimestamp="2026-03-18 14:29:06 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.617541809 +0000 UTC m=+1739.746670266" watchObservedRunningTime="2026-03-18 14:29:15.705521563 +0000 UTC m=+1739.834650010" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.719332 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7464859c55-9455r" podStartSLOduration=30.71931243 podStartE2EDuration="30.71931243s" podCreationTimestamp="2026-03-18 14:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.643512813 +0000 UTC m=+1739.772641270" watchObservedRunningTime="2026-03-18 14:29:15.71931243 +0000 UTC m=+1739.848440877" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.730955 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-z52bw" podStartSLOduration=9.730936773 podStartE2EDuration="9.730936773s" podCreationTimestamp="2026-03-18 14:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.668919642 +0000 UTC m=+1739.798048099" watchObservedRunningTime="2026-03-18 14:29:15.730936773 +0000 UTC m=+1739.860065230" Mar 18 14:29:15 crc kubenswrapper[4857]: I0318 14:29:15.739010 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" podStartSLOduration=7.738998746 podStartE2EDuration="7.738998746s" podCreationTimestamp="2026-03-18 14:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:29:15.691697115 +0000 UTC m=+1739.820825562" watchObservedRunningTime="2026-03-18 14:29:15.738998746 +0000 UTC m=+1739.868127203" Mar 18 14:29:16 crc kubenswrapper[4857]: 
I0318 14:29:16.645927 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerStarted","Data":"73706bfa117cdd94eaf206ec3aa560ee5da32b4867b5d9b836538e9d56f2975b"} Mar 18 14:29:16 crc kubenswrapper[4857]: I0318 14:29:16.646420 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:29:17 crc kubenswrapper[4857]: I0318 14:29:17.668913 4857 generic.go:334] "Generic (PLEG): container finished" podID="25041d81-4986-400d-b6bc-eab23db1550f" containerID="4b41effda7dc3e901c036a53169496b4790ec2243a7e696aee78a02dd9a6bc97" exitCode=0 Mar 18 14:29:17 crc kubenswrapper[4857]: I0318 14:29:17.669136 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z52bw" event={"ID":"25041d81-4986-400d-b6bc-eab23db1550f","Type":"ContainerDied","Data":"4b41effda7dc3e901c036a53169496b4790ec2243a7e696aee78a02dd9a6bc97"} Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.138019 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.138558 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.203050 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6ff5fc6d6f-phz9q" 
podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.203135 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.304198 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.231:8000/healthcheck\": dial tcp 10.217.0.231:8000: connect: connection refused" Mar 18 14:29:18 crc kubenswrapper[4857]: I0318 14:29:18.304310 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.231:8000/healthcheck\": dial tcp 10.217.0.231:8000: connect: connection refused" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.343270 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.497006 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntkln\" (UniqueName: \"kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln\") pod \"25041d81-4986-400d-b6bc-eab23db1550f\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.497413 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts\") pod \"25041d81-4986-400d-b6bc-eab23db1550f\" (UID: \"25041d81-4986-400d-b6bc-eab23db1550f\") " Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.497895 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25041d81-4986-400d-b6bc-eab23db1550f" (UID: "25041d81-4986-400d-b6bc-eab23db1550f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.498258 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25041d81-4986-400d-b6bc-eab23db1550f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.526068 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln" (OuterVolumeSpecName: "kube-api-access-ntkln") pod "25041d81-4986-400d-b6bc-eab23db1550f" (UID: "25041d81-4986-400d-b6bc-eab23db1550f"). InnerVolumeSpecName "kube-api-access-ntkln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.600915 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntkln\" (UniqueName: \"kubernetes.io/projected/25041d81-4986-400d-b6bc-eab23db1550f-kube-api-access-ntkln\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.718465 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z52bw" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.718464 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z52bw" event={"ID":"25041d81-4986-400d-b6bc-eab23db1550f","Type":"ContainerDied","Data":"978215fe584d842d45447162c4b81e3c21d23f921ff625f56c6b6a29f80fb799"} Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.719358 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978215fe584d842d45447162c4b81e3c21d23f921ff625f56c6b6a29f80fb799" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.721789 4857 generic.go:334] "Generic (PLEG): container finished" podID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerID="079d2d27256a1caf76fbcb5ed141e88a671177d3a1a4b62c697ccff076b1989e" exitCode=1 Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.721903 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerDied","Data":"079d2d27256a1caf76fbcb5ed141e88a671177d3a1a4b62c697ccff076b1989e"} Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.721964 4857 scope.go:117] "RemoveContainer" containerID="5a84a5d530e71d088849f1d34327344718e02c68556195e719c83bafa7df98e3" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.722955 4857 scope.go:117] "RemoveContainer" containerID="079d2d27256a1caf76fbcb5ed141e88a671177d3a1a4b62c697ccff076b1989e" Mar 18 14:29:19 crc 
kubenswrapper[4857]: E0318 14:29:19.723564 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-cfnapi pod=heat-cfnapi-7464859c55-9455r_openstack(d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde)\"" pod="openstack/heat-cfnapi-7464859c55-9455r" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.727042 4857 generic.go:334] "Generic (PLEG): container finished" podID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerID="73706bfa117cdd94eaf206ec3aa560ee5da32b4867b5d9b836538e9d56f2975b" exitCode=1 Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.727232 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerDied","Data":"73706bfa117cdd94eaf206ec3aa560ee5da32b4867b5d9b836538e9d56f2975b"} Mar 18 14:29:19 crc kubenswrapper[4857]: I0318 14:29:19.728354 4857 scope.go:117] "RemoveContainer" containerID="73706bfa117cdd94eaf206ec3aa560ee5da32b4867b5d9b836538e9d56f2975b" Mar 18 14:29:19 crc kubenswrapper[4857]: E0318 14:29:19.728905 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-api pod=heat-api-5b9fd4bc48-dqzj8_openstack(dc868497-4c53-4e0b-9e06-c1b55bb777d2)\"" pod="openstack/heat-api-5b9fd4bc48-dqzj8" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" Mar 18 14:29:20 crc kubenswrapper[4857]: I0318 14:29:20.186331 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6756b6568c-jbstd" Mar 18 14:29:20 crc kubenswrapper[4857]: I0318 14:29:20.208849 4857 scope.go:117] "RemoveContainer" containerID="1e03c4cc36d44ae063df2922f2ed27c3b238dfff3baf4f2f1d89e84042c3cda8" Mar 18 14:29:20 crc kubenswrapper[4857]: I0318 14:29:20.313151 4857 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:29:20 crc kubenswrapper[4857]: I0318 14:29:20.332639 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6ff5fc6d6f-phz9q" Mar 18 14:29:20 crc kubenswrapper[4857]: I0318 14:29:20.465588 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.645202 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:29:23 crc kubenswrapper[4857]: E0318 14:29:23.645948 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.649351 4857 generic.go:334] "Generic (PLEG): container finished" podID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerID="9a830badb0172585287d5a74e792e12535d6738175fe6bb896183812eb56e9ca" exitCode=137 Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.641704 4857 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-4cprr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.652728 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" 
podUID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.650080 4857 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gwqfj container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.652915 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" podUID="45ebdaa4-576e-40b7-810d-0f4fc570125d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.658989 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.659051 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:29:23 crc kubenswrapper[4857]: I0318 14:29:23.728905 4857 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerDied","Data":"9a830badb0172585287d5a74e792e12535d6738175fe6bb896183812eb56e9ca"} Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.131057 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.135955 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data\") pod \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.136076 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle\") pod \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.138133 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r9hh\" (UniqueName: \"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh\") pod \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.138342 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom\") pod \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\" (UID: \"dc868497-4c53-4e0b-9e06-c1b55bb777d2\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.149241 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh" (OuterVolumeSpecName: "kube-api-access-7r9hh") pod "dc868497-4c53-4e0b-9e06-c1b55bb777d2" (UID: "dc868497-4c53-4e0b-9e06-c1b55bb777d2"). InnerVolumeSpecName "kube-api-access-7r9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.164896 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc868497-4c53-4e0b-9e06-c1b55bb777d2" (UID: "dc868497-4c53-4e0b-9e06-c1b55bb777d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.205983 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc868497-4c53-4e0b-9e06-c1b55bb777d2" (UID: "dc868497-4c53-4e0b-9e06-c1b55bb777d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.245693 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r9hh\" (UniqueName: \"kubernetes.io/projected/dc868497-4c53-4e0b-9e06-c1b55bb777d2-kube-api-access-7r9hh\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.245739 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.245772 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:24 crc kubenswrapper[4857]: E0318 14:29:24.265554 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:24 crc kubenswrapper[4857]: E0318 14:29:24.268162 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.274159 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data" (OuterVolumeSpecName: "config-data") pod "dc868497-4c53-4e0b-9e06-c1b55bb777d2" (UID: "dc868497-4c53-4e0b-9e06-c1b55bb777d2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: E0318 14:29:24.282932 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:24 crc kubenswrapper[4857]: E0318 14:29:24.283030 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8dbd8fb56-f2qm7" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.348476 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc868497-4c53-4e0b-9e06-c1b55bb777d2-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.394152 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.404719 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.557436 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hnqs\" (UniqueName: \"kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.913594 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914032 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914083 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914858 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom\") pod \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914889 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn6ms\" 
(UniqueName: \"kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms\") pod \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914912 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.914955 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.915015 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle\") pod \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.915252 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data\") pod \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\" (UID: \"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.915289 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle\") pod \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\" (UID: \"b2f4faf3-64ce-4979-aff0-7eb76f7f5377\") " Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 
14:29:24.917687 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.918184 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.925106 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms" (OuterVolumeSpecName: "kube-api-access-dn6ms") pod "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" (UID: "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde"). InnerVolumeSpecName "kube-api-access-dn6ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.929400 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts" (OuterVolumeSpecName: "scripts") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.948970 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" (UID: "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.949228 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs" (OuterVolumeSpecName: "kube-api-access-4hnqs") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "kube-api-access-4hnqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.971040 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b9fd4bc48-dqzj8" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.970908 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b9fd4bc48-dqzj8" event={"ID":"dc868497-4c53-4e0b-9e06-c1b55bb777d2","Type":"ContainerDied","Data":"1cf109cea8afa6eed50d10df458d1cb7724789061f4fc5ab6ec4a034c1d07e7d"} Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.971675 4857 scope.go:117] "RemoveContainer" containerID="73706bfa117cdd94eaf206ec3aa560ee5da32b4867b5d9b836538e9d56f2975b" Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.997540 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2f4faf3-64ce-4979-aff0-7eb76f7f5377","Type":"ContainerDied","Data":"861871f58f004fa01987e8c39cf5ad26377723d8abce5d0fcf028a5e6c96e659"} Mar 18 14:29:24 crc kubenswrapper[4857]: I0318 14:29:24.997674 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.027705 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" (UID: "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.029939 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.029986 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnqs\" (UniqueName: \"kubernetes.io/projected/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-kube-api-access-4hnqs\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.030000 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.030009 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.030017 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn6ms\" (UniqueName: \"kubernetes.io/projected/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-kube-api-access-dn6ms\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.030025 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.030032 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.037062 4857 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.037525 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7464859c55-9455r" event={"ID":"d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde","Type":"ContainerDied","Data":"a6614b24607e03938084cfa6f96f43e00952bee965e8e18cecc1e6037032b9a7"} Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.037654 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7464859c55-9455r" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.044525 4857 scope.go:117] "RemoveContainer" containerID="9a830badb0172585287d5a74e792e12535d6738175fe6bb896183812eb56e9ca" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.048961 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data" (OuterVolumeSpecName: "config-data") pod "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" (UID: "d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.071352 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.083147 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5b9fd4bc48-dqzj8"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.121982 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.132819 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.132850 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.132861 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.141873 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data" (OuterVolumeSpecName: "config-data") pod "b2f4faf3-64ce-4979-aff0-7eb76f7f5377" (UID: "b2f4faf3-64ce-4979-aff0-7eb76f7f5377"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.158178 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.158361 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.162265 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.192940 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" path="/var/lib/kubelet/pods/dc868497-4c53-4e0b-9e06-c1b55bb777d2/volumes" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.235462 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2f4faf3-64ce-4979-aff0-7eb76f7f5377-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.267903 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.268253 4857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.269661 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.494473 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.554387 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.583953 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.603882 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7464859c55-9455r"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.614835 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615676 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615705 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615729 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615735 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615765 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615771 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615782 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="proxy-httpd" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615789 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="proxy-httpd" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615807 4857 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="registry-server" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615813 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="registry-server" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615822 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615827 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615834 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25041d81-4986-400d-b6bc-eab23db1550f" containerName="mariadb-database-create" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615841 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="25041d81-4986-400d-b6bc-eab23db1550f" containerName="mariadb-database-create" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615853 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="extract-utilities" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615859 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="extract-utilities" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615882 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615888 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615903 4857 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="sg-core" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615910 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="sg-core" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615927 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-notification-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615933 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-notification-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615947 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="extract-content" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615953 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="extract-content" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.615965 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-central-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.615971 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-central-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616245 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="proxy-httpd" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616268 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616277 4857 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616288 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-notification-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616296 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="25041d81-4986-400d-b6bc-eab23db1550f" containerName="mariadb-database-create" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616304 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616314 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616324 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616335 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="sg-core" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616347 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34cc331-1dee-4d42-8824-d91dbf40e144" containerName="registry-server" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616365 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" containerName="ceilometer-central-agent" Mar 18 14:29:25 crc kubenswrapper[4857]: E0318 14:29:25.616593 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: 
I0318 14:29:25.616601 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc868497-4c53-4e0b-9e06-c1b55bb777d2" containerName="heat-api" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.616865 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" containerName="heat-cfnapi" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.618853 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.622786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.623038 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.628515 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.819909 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.819987 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.820658 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.820713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.820764 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.820998 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.821181 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmn9\" (UniqueName: \"kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923074 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " 
pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923142 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923238 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923265 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923283 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923349 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923405 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmn9\" (UniqueName: 
\"kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.923744 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.924290 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.947119 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.949952 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.950370 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.950455 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mpmn9\" (UniqueName: \"kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:25 crc kubenswrapper[4857]: I0318 14:29:25.954549 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") " pod="openstack/ceilometer-0" Mar 18 14:29:26 crc kubenswrapper[4857]: I0318 14:29:26.084018 4857 scope.go:117] "RemoveContainer" containerID="f64321db60225e30892564f68ebd7e8290f3adbd0654128e0b032d54359cd1c2" Mar 18 14:29:26 crc kubenswrapper[4857]: I0318 14:29:26.559357 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:29:26 crc kubenswrapper[4857]: I0318 14:29:26.686914 4857 scope.go:117] "RemoveContainer" containerID="1db29361e9749fadfc0b932964ddce8d3e87453c9057773e06b7661b8e13fbe3" Mar 18 14:29:27 crc kubenswrapper[4857]: I0318 14:29:27.188851 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f4faf3-64ce-4979-aff0-7eb76f7f5377" path="/var/lib/kubelet/pods/b2f4faf3-64ce-4979-aff0-7eb76f7f5377/volumes" Mar 18 14:29:27 crc kubenswrapper[4857]: I0318 14:29:27.189666 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde" path="/var/lib/kubelet/pods/d1a34b9b-51dc-45e7-80a5-d7d2b27f4cde/volumes" Mar 18 14:29:27 crc kubenswrapper[4857]: I0318 14:29:27.333221 4857 scope.go:117] "RemoveContainer" containerID="de936398dfad06d25c2900a725d41a3fe1236f429a4963f99fd02fd2821adfac" Mar 18 14:29:27 crc kubenswrapper[4857]: I0318 14:29:27.707425 4857 scope.go:117] "RemoveContainer" 
containerID="079d2d27256a1caf76fbcb5ed141e88a671177d3a1a4b62c697ccff076b1989e" Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:28.930312 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:29.102072 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerStarted","Data":"8bcd6877b7d4513901f1a3fe6c1541e241ffa3c7b1910ffaec655cc0f55053cd"} Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:31.338572 4857 generic.go:334] "Generic (PLEG): container finished" podID="51ad3374-1103-4d14-a250-0efcbc82abf8" containerID="f0611cf9711a6192a725046c208a601db4a23533488b88f34572aee00e808023" exitCode=0 Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:31.338962 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2bj5j" event={"ID":"51ad3374-1103-4d14-a250-0efcbc82abf8","Type":"ContainerDied","Data":"f0611cf9711a6192a725046c208a601db4a23533488b88f34572aee00e808023"} Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:31.342736 4857 generic.go:334] "Generic (PLEG): container finished" podID="414737ac-b39d-4b54-bd95-2c8448fd22dc" containerID="be924bdfc117537af15108773f109c8ad3095151d27b4d0f9524ac83e25be840" exitCode=0 Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:31.342814 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dt4sv" event={"ID":"414737ac-b39d-4b54-bd95-2c8448fd22dc","Type":"ContainerDied","Data":"be924bdfc117537af15108773f109c8ad3095151d27b4d0f9524ac83e25be840"} Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:34.110701 4857 generic.go:334] "Generic (PLEG): container finished" podID="c09c7b6e-3108-4aab-8597-20e2f835cb63" containerID="6218698e97e32d7301aaaf93042d3693fece60bfe01bbc0a99f2f988998c89ac" exitCode=0 Mar 18 14:29:34 crc kubenswrapper[4857]: I0318 14:29:34.110978 4857 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" event={"ID":"c09c7b6e-3108-4aab-8597-20e2f835cb63","Type":"ContainerDied","Data":"6218698e97e32d7301aaaf93042d3693fece60bfe01bbc0a99f2f988998c89ac"} Mar 18 14:29:34 crc kubenswrapper[4857]: E0318 14:29:34.266858 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:34 crc kubenswrapper[4857]: E0318 14:29:34.268237 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:34 crc kubenswrapper[4857]: E0318 14:29:34.272710 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 18 14:29:34 crc kubenswrapper[4857]: E0318 14:29:34.272777 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8dbd8fb56-f2qm7" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.142633 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerStarted","Data":"3e56b66b8c0b729d35d9fa7afd928d3c89ab581c7994587c38d517ff56646d0f"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.145401 4857 generic.go:334] "Generic (PLEG): container finished" podID="4c281c9a-b573-4e11-acfa-f15205eb5f58" containerID="149f1e15e3daf3f8f14ff6ddc3ec387b2e26a80b899edeed9016e44af707ee88" exitCode=0 Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.145464 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" event={"ID":"4c281c9a-b573-4e11-acfa-f15205eb5f58","Type":"ContainerDied","Data":"149f1e15e3daf3f8f14ff6ddc3ec387b2e26a80b899edeed9016e44af707ee88"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.166855 4857 generic.go:334] "Generic (PLEG): container finished" podID="e4c1ef3d-cd3d-405d-a484-af6052f4a291" containerID="808aa0068c3f7277556505fb5d649cf50f221c456cd38a542096ae3a19529c92" exitCode=0 Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.170433 4857 generic.go:334] "Generic (PLEG): container finished" podID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" exitCode=0 Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194299 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dd68-account-create-update-mlwzf" event={"ID":"e4c1ef3d-cd3d-405d-a484-af6052f4a291","Type":"ContainerDied","Data":"808aa0068c3f7277556505fb5d649cf50f221c456cd38a542096ae3a19529c92"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8dbd8fb56-f2qm7" event={"ID":"0e7d3d4f-6574-4453-9838-6433716eb9ba","Type":"ContainerDied","Data":"78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194680 4857 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-engine-8dbd8fb56-f2qm7" event={"ID":"0e7d3d4f-6574-4453-9838-6433716eb9ba","Type":"ContainerDied","Data":"05ec60997e13ba5817cb3f1f7f2cdfbaca7da6c4fef303ceed3a03aaa32ee6a4"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194693 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ec60997e13ba5817cb3f1f7f2cdfbaca7da6c4fef303ceed3a03aaa32ee6a4" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194719 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2bj5j" event={"ID":"51ad3374-1103-4d14-a250-0efcbc82abf8","Type":"ContainerDied","Data":"fd271d802bdc7e10c241dbbeeb10b98af5de5f9bed214bab4c2e4514a992b5ff"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194735 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd271d802bdc7e10c241dbbeeb10b98af5de5f9bed214bab4c2e4514a992b5ff" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194748 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dt4sv" event={"ID":"414737ac-b39d-4b54-bd95-2c8448fd22dc","Type":"ContainerDied","Data":"f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b"} Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.194785 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f071ac54d87926cb8b7030ab808af675b51ca1898d4acab09bf8d7633178c47b" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.229883 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.237708 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.266337 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.314500 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts\") pod \"51ad3374-1103-4d14-a250-0efcbc82abf8\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.314960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9pc5\" (UniqueName: \"kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5\") pod \"51ad3374-1103-4d14-a250-0efcbc82abf8\" (UID: \"51ad3374-1103-4d14-a250-0efcbc82abf8\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315038 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts\") pod \"414737ac-b39d-4b54-bd95-2c8448fd22dc\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315068 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle\") pod \"0e7d3d4f-6574-4453-9838-6433716eb9ba\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315097 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data\") pod \"0e7d3d4f-6574-4453-9838-6433716eb9ba\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315157 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom\") pod \"0e7d3d4f-6574-4453-9838-6433716eb9ba\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315272 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bhb\" (UniqueName: \"kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb\") pod \"414737ac-b39d-4b54-bd95-2c8448fd22dc\" (UID: \"414737ac-b39d-4b54-bd95-2c8448fd22dc\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.315311 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4q5f\" (UniqueName: \"kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f\") pod \"0e7d3d4f-6574-4453-9838-6433716eb9ba\" (UID: \"0e7d3d4f-6574-4453-9838-6433716eb9ba\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.316082 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "414737ac-b39d-4b54-bd95-2c8448fd22dc" (UID: "414737ac-b39d-4b54-bd95-2c8448fd22dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.316111 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51ad3374-1103-4d14-a250-0efcbc82abf8" (UID: "51ad3374-1103-4d14-a250-0efcbc82abf8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.318018 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/414737ac-b39d-4b54-bd95-2c8448fd22dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.318047 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad3374-1103-4d14-a250-0efcbc82abf8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.322582 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f" (OuterVolumeSpecName: "kube-api-access-b4q5f") pod "0e7d3d4f-6574-4453-9838-6433716eb9ba" (UID: "0e7d3d4f-6574-4453-9838-6433716eb9ba"). InnerVolumeSpecName "kube-api-access-b4q5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.326598 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb" (OuterVolumeSpecName: "kube-api-access-x7bhb") pod "414737ac-b39d-4b54-bd95-2c8448fd22dc" (UID: "414737ac-b39d-4b54-bd95-2c8448fd22dc"). InnerVolumeSpecName "kube-api-access-x7bhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.327533 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e7d3d4f-6574-4453-9838-6433716eb9ba" (UID: "0e7d3d4f-6574-4453-9838-6433716eb9ba"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.331904 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5" (OuterVolumeSpecName: "kube-api-access-p9pc5") pod "51ad3374-1103-4d14-a250-0efcbc82abf8" (UID: "51ad3374-1103-4d14-a250-0efcbc82abf8"). InnerVolumeSpecName "kube-api-access-p9pc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.378965 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e7d3d4f-6574-4453-9838-6433716eb9ba" (UID: "0e7d3d4f-6574-4453-9838-6433716eb9ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.399201 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data" (OuterVolumeSpecName: "config-data") pod "0e7d3d4f-6574-4453-9838-6433716eb9ba" (UID: "0e7d3d4f-6574-4453-9838-6433716eb9ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421771 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bhb\" (UniqueName: \"kubernetes.io/projected/414737ac-b39d-4b54-bd95-2c8448fd22dc-kube-api-access-x7bhb\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421805 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4q5f\" (UniqueName: \"kubernetes.io/projected/0e7d3d4f-6574-4453-9838-6433716eb9ba-kube-api-access-b4q5f\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421816 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9pc5\" (UniqueName: \"kubernetes.io/projected/51ad3374-1103-4d14-a250-0efcbc82abf8-kube-api-access-p9pc5\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421828 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421839 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.421849 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e7d3d4f-6574-4453-9838-6433716eb9ba-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.687312 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.730685 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8htgk\" (UniqueName: \"kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk\") pod \"c09c7b6e-3108-4aab-8597-20e2f835cb63\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.730922 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts\") pod \"c09c7b6e-3108-4aab-8597-20e2f835cb63\" (UID: \"c09c7b6e-3108-4aab-8597-20e2f835cb63\") " Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.731875 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c09c7b6e-3108-4aab-8597-20e2f835cb63" (UID: "c09c7b6e-3108-4aab-8597-20e2f835cb63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.735987 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk" (OuterVolumeSpecName: "kube-api-access-8htgk") pod "c09c7b6e-3108-4aab-8597-20e2f835cb63" (UID: "c09c7b6e-3108-4aab-8597-20e2f835cb63"). InnerVolumeSpecName "kube-api-access-8htgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.835029 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c09c7b6e-3108-4aab-8597-20e2f835cb63-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:35 crc kubenswrapper[4857]: I0318 14:29:35.835071 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8htgk\" (UniqueName: \"kubernetes.io/projected/c09c7b6e-3108-4aab-8597-20e2f835cb63-kube-api-access-8htgk\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.763441 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerStarted","Data":"2e4fb3f8ef3006c061a625ff5a9f63356c617c9d3f95f8f5a74c48fbbda0d8fc"} Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.764316 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerStarted","Data":"391b56c5db2ec1f7ec9720f61219a1f85b24f08b1a3a2c837709caa37e1356f1"} Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.767700 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dt4sv" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.776107 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2bj5j" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.776821 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.780673 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cdc0-account-create-update-9lftd" event={"ID":"c09c7b6e-3108-4aab-8597-20e2f835cb63","Type":"ContainerDied","Data":"055405a7694e90e6d0711df33d556611b9d475199e7fd25c07364efd602813c2"} Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.780733 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="055405a7694e90e6d0711df33d556611b9d475199e7fd25c07364efd602813c2" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.781824 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8dbd8fb56-f2qm7" Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.901570 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:29:36 crc kubenswrapper[4857]: I0318 14:29:36.920496 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-8dbd8fb56-f2qm7"] Mar 18 14:29:37 crc kubenswrapper[4857]: I0318 14:29:37.195719 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:29:37 crc kubenswrapper[4857]: E0318 14:29:37.196082 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:29:37 crc kubenswrapper[4857]: I0318 14:29:37.223045 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" 
path="/var/lib/kubelet/pods/0e7d3d4f-6574-4453-9838-6433716eb9ba/volumes" Mar 18 14:29:37 crc kubenswrapper[4857]: I0318 14:29:37.328195 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.317096 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.370461 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.370479 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-056e-account-create-update-lsd7h" event={"ID":"4c281c9a-b573-4e11-acfa-f15205eb5f58","Type":"ContainerDied","Data":"c25ffe9d862d777bea095b1159c4a2d8cd3482c861cd734686eea63e2451f024"} Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.371283 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25ffe9d862d777bea095b1159c4a2d8cd3482c861cd734686eea63e2451f024" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.374632 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dd68-account-create-update-mlwzf" event={"ID":"e4c1ef3d-cd3d-405d-a484-af6052f4a291","Type":"ContainerDied","Data":"4592ce0f34be34807d5593c749734a63fb06a102e07c1e8708ff4571f6c57664"} Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.374675 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4592ce0f34be34807d5593c749734a63fb06a102e07c1e8708ff4571f6c57664" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.374722 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dd68-account-create-update-mlwzf" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.825856 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts\") pod \"4c281c9a-b573-4e11-acfa-f15205eb5f58\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.826076 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh8sk\" (UniqueName: \"kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk\") pod \"4c281c9a-b573-4e11-acfa-f15205eb5f58\" (UID: \"4c281c9a-b573-4e11-acfa-f15205eb5f58\") " Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.840330 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c281c9a-b573-4e11-acfa-f15205eb5f58" (UID: "4c281c9a-b573-4e11-acfa-f15205eb5f58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.919090 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk" (OuterVolumeSpecName: "kube-api-access-qh8sk") pod "4c281c9a-b573-4e11-acfa-f15205eb5f58" (UID: "4c281c9a-b573-4e11-acfa-f15205eb5f58"). InnerVolumeSpecName "kube-api-access-qh8sk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.932178 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpr7w\" (UniqueName: \"kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w\") pod \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.933604 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts\") pod \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\" (UID: \"e4c1ef3d-cd3d-405d-a484-af6052f4a291\") " Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.934953 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4c1ef3d-cd3d-405d-a484-af6052f4a291" (UID: "e4c1ef3d-cd3d-405d-a484-af6052f4a291"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.942714 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c281c9a-b573-4e11-acfa-f15205eb5f58-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.943182 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4c1ef3d-cd3d-405d-a484-af6052f4a291-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.943208 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh8sk\" (UniqueName: \"kubernetes.io/projected/4c281c9a-b573-4e11-acfa-f15205eb5f58-kube-api-access-qh8sk\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:38 crc kubenswrapper[4857]: I0318 14:29:38.957226 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w" (OuterVolumeSpecName: "kube-api-access-lpr7w") pod "e4c1ef3d-cd3d-405d-a484-af6052f4a291" (UID: "e4c1ef3d-cd3d-405d-a484-af6052f4a291"). InnerVolumeSpecName "kube-api-access-lpr7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:29:39 crc kubenswrapper[4857]: I0318 14:29:39.044894 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpr7w\" (UniqueName: \"kubernetes.io/projected/e4c1ef3d-cd3d-405d-a484-af6052f4a291-kube-api-access-lpr7w\") on node \"crc\" DevicePath \"\"" Mar 18 14:29:41 crc kubenswrapper[4857]: I0318 14:29:41.992862 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerStarted","Data":"98977c71e421c647fcd9d716ea95499a4d0f85acbe72231e6dd10eb0938ffaba"} Mar 18 14:29:41 crc kubenswrapper[4857]: I0318 14:29:41.993711 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:29:42 crc kubenswrapper[4857]: I0318 14:29:42.036806 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.529010211 podStartE2EDuration="17.036772671s" podCreationTimestamp="2026-03-18 14:29:25 +0000 UTC" firstStartedPulling="2026-03-18 14:29:28.939226975 +0000 UTC m=+1753.068355432" lastFinishedPulling="2026-03-18 14:29:40.446989435 +0000 UTC m=+1764.576117892" observedRunningTime="2026-03-18 14:29:42.016292666 +0000 UTC m=+1766.145421133" watchObservedRunningTime="2026-03-18 14:29:42.036772671 +0000 UTC m=+1766.165901128" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.980127 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"] Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.983901 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c1ef3d-cd3d-405d-a484-af6052f4a291" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.984258 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c1ef3d-cd3d-405d-a484-af6052f4a291" 
containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.984385 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c281c9a-b573-4e11-acfa-f15205eb5f58" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.984486 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c281c9a-b573-4e11-acfa-f15205eb5f58" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.984602 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414737ac-b39d-4b54-bd95-2c8448fd22dc" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.984678 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="414737ac-b39d-4b54-bd95-2c8448fd22dc" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.984785 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09c7b6e-3108-4aab-8597-20e2f835cb63" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.984870 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09c7b6e-3108-4aab-8597-20e2f835cb63" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.985000 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ad3374-1103-4d14-a250-0efcbc82abf8" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.985102 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ad3374-1103-4d14-a250-0efcbc82abf8" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: E0318 14:29:43.985210 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.985289 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.985856 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="414737ac-b39d-4b54-bd95-2c8448fd22dc" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.985963 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c1ef3d-cd3d-405d-a484-af6052f4a291" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.986087 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c281c9a-b573-4e11-acfa-f15205eb5f58" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.986206 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7d3d4f-6574-4453-9838-6433716eb9ba" containerName="heat-engine" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.986292 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09c7b6e-3108-4aab-8597-20e2f835cb63" containerName="mariadb-account-create-update" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.986388 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ad3374-1103-4d14-a250-0efcbc82abf8" containerName="mariadb-database-create" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.992723 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:43 crc kubenswrapper[4857]: I0318 14:29:43.996850 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"] Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.045105 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2tz\" (UniqueName: \"kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.045336 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.045559 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.148307 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2tz\" (UniqueName: \"kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.148372 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.148467 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.149169 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.149293 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.169728 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2tz\" (UniqueName: \"kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz\") pod \"certified-operators-xcd9f\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.323379 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.895841 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h7rnb"] Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.898660 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.902250 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fqdnm" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.902598 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.902727 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 18 14:29:44 crc kubenswrapper[4857]: I0318 14:29:44.924398 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h7rnb"] Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.022419 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.022547 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzg8b\" (UniqueName: \"kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 
14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.022607 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.022857 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.124952 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.125024 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.125199 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 
14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.125259 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzg8b\" (UniqueName: \"kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.132133 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.132361 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.137398 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.149438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzg8b\" (UniqueName: \"kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b\") pod \"nova-cell0-conductor-db-sync-h7rnb\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:29:45 crc 
kubenswrapper[4857]: I0318 14:29:45.239979 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h7rnb"
Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.307992 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"]
Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.785836 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h7rnb"]
Mar 18 14:29:45 crc kubenswrapper[4857]: W0318 14:29:45.814204 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda48c014c_1b70_4d0c_b01b_9c1060620b0e.slice/crio-7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89 WatchSource:0}: Error finding container 7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89: Status 404 returned error can't find the container with id 7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89
Mar 18 14:29:45 crc kubenswrapper[4857]: I0318 14:29:45.817398 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.241795 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.242404 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-central-agent" containerID="cri-o://3e56b66b8c0b729d35d9fa7afd928d3c89ab581c7994587c38d517ff56646d0f" gracePeriod=30
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.242587 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="proxy-httpd" containerID="cri-o://98977c71e421c647fcd9d716ea95499a4d0f85acbe72231e6dd10eb0938ffaba" gracePeriod=30
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.242654 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-notification-agent" containerID="cri-o://391b56c5db2ec1f7ec9720f61219a1f85b24f08b1a3a2c837709caa37e1356f1" gracePeriod=30
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.242844 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="sg-core" containerID="cri-o://2e4fb3f8ef3006c061a625ff5a9f63356c617c9d3f95f8f5a74c48fbbda0d8fc" gracePeriod=30
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.293970 4857 generic.go:334] "Generic (PLEG): container finished" podID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerID="dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5" exitCode=0
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.294077 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerDied","Data":"dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5"}
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.294117 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerStarted","Data":"6acc9dbfdb8dabe0332eb41aa334a8f440b2048b0c8bbb0c72beb878cd34cc19"}
Mar 18 14:29:46 crc kubenswrapper[4857]: I0318 14:29:46.298648 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" event={"ID":"a48c014c-1b70-4d0c-b01b-9c1060620b0e","Type":"ContainerStarted","Data":"7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89"}
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.325892 4857 generic.go:334] "Generic (PLEG): container finished" podID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerID="98977c71e421c647fcd9d716ea95499a4d0f85acbe72231e6dd10eb0938ffaba" exitCode=0
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.326258 4857 generic.go:334] "Generic (PLEG): container finished" podID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerID="2e4fb3f8ef3006c061a625ff5a9f63356c617c9d3f95f8f5a74c48fbbda0d8fc" exitCode=2
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.326268 4857 generic.go:334] "Generic (PLEG): container finished" podID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerID="391b56c5db2ec1f7ec9720f61219a1f85b24f08b1a3a2c837709caa37e1356f1" exitCode=0
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.326300 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerDied","Data":"98977c71e421c647fcd9d716ea95499a4d0f85acbe72231e6dd10eb0938ffaba"}
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.326334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerDied","Data":"2e4fb3f8ef3006c061a625ff5a9f63356c617c9d3f95f8f5a74c48fbbda0d8fc"}
Mar 18 14:29:47 crc kubenswrapper[4857]: I0318 14:29:47.326345 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerDied","Data":"391b56c5db2ec1f7ec9720f61219a1f85b24f08b1a3a2c837709caa37e1356f1"}
Mar 18 14:29:49 crc kubenswrapper[4857]: I0318 14:29:49.163809 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"
Mar 18 14:29:49 crc kubenswrapper[4857]: E0318 14:29:49.164700 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 14:29:49 crc kubenswrapper[4857]: I0318 14:29:49.365411 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerStarted","Data":"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9"}
Mar 18 14:29:55 crc kubenswrapper[4857]: I0318 14:29:55.463108 4857 generic.go:334] "Generic (PLEG): container finished" podID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerID="2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9" exitCode=0
Mar 18 14:29:55 crc kubenswrapper[4857]: I0318 14:29:55.463184 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerDied","Data":"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9"}
Mar 18 14:29:56 crc kubenswrapper[4857]: I0318 14:29:56.564710 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": dial tcp 10.217.0.242:3000: connect: connection refused"
Mar 18 14:29:57 crc kubenswrapper[4857]: I0318 14:29:57.495256 4857 generic.go:334] "Generic (PLEG): container finished" podID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerID="3e56b66b8c0b729d35d9fa7afd928d3c89ab581c7994587c38d517ff56646d0f" exitCode=0
Mar 18 14:29:57 crc kubenswrapper[4857]: I0318 14:29:57.495321 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerDied","Data":"3e56b66b8c0b729d35d9fa7afd928d3c89ab581c7994587c38d517ff56646d0f"}
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.156675 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564070-fjwnr"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.159790 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564070-fjwnr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.165925 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.166613 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.166748 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.179839 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564070-fjwnr"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.221614 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj7f\" (UniqueName: \"kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f\") pod \"auto-csr-approver-29564070-fjwnr\" (UID: \"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730\") " pod="openshift-infra/auto-csr-approver-29564070-fjwnr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.265021 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.267308 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.269950 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.270286 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.282777 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.325605 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.325827 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj7f\" (UniqueName: \"kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f\") pod \"auto-csr-approver-29564070-fjwnr\" (UID: \"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730\") " pod="openshift-infra/auto-csr-approver-29564070-fjwnr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.326022 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.326979 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwb5\" (UniqueName: \"kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.345700 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj7f\" (UniqueName: \"kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f\") pod \"auto-csr-approver-29564070-fjwnr\" (UID: \"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730\") " pod="openshift-infra/auto-csr-approver-29564070-fjwnr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.404441 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.430233 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.430411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwb5\" (UniqueName: \"kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.430684 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.433606 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.442110 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.470977 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwb5\" (UniqueName: \"kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5\") pod \"collect-profiles-29564070-868gr\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.503696 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564070-fjwnr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533088 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533370 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmn9\" (UniqueName: \"kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533435 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533479 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533519 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533616 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.533634 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml\") pod \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\" (UID: \"cc0a750e-88e6-4743-bd8e-4deb0f9e121f\") "
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.534956 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.535588 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.539860 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts" (OuterVolumeSpecName: "scripts") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.543925 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9" (OuterVolumeSpecName: "kube-api-access-mpmn9") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "kube-api-access-mpmn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.548048 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc0a750e-88e6-4743-bd8e-4deb0f9e121f","Type":"ContainerDied","Data":"8bcd6877b7d4513901f1a3fe6c1541e241ffa3c7b1910ffaec655cc0f55053cd"}
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.548127 4857 scope.go:117] "RemoveContainer" containerID="98977c71e421c647fcd9d716ea95499a4d0f85acbe72231e6dd10eb0938ffaba"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.548409 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.553691 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" event={"ID":"a48c014c-1b70-4d0c-b01b-9c1060620b0e","Type":"ContainerStarted","Data":"3f63307fe76eb7aff1a9297411edb40614fbf2e0d0c93a40fb6eb2bf63d20c99"}
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.560477 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerStarted","Data":"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49"}
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.582985 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" podStartSLOduration=2.411118449 podStartE2EDuration="16.582954689s" podCreationTimestamp="2026-03-18 14:29:44 +0000 UTC" firstStartedPulling="2026-03-18 14:29:45.817103102 +0000 UTC m=+1769.946231549" lastFinishedPulling="2026-03-18 14:29:59.988939332 +0000 UTC m=+1784.118067789" observedRunningTime="2026-03-18 14:30:00.571347417 +0000 UTC m=+1784.700475894" watchObservedRunningTime="2026-03-18 14:30:00.582954689 +0000 UTC m=+1784.712083146"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.597883 4857 scope.go:117] "RemoveContainer" containerID="2e4fb3f8ef3006c061a625ff5a9f63356c617c9d3f95f8f5a74c48fbbda0d8fc"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.605707 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcd9f" podStartSLOduration=3.591101029 podStartE2EDuration="17.605672661s" podCreationTimestamp="2026-03-18 14:29:43 +0000 UTC" firstStartedPulling="2026-03-18 14:29:46.299149582 +0000 UTC m=+1770.428278039" lastFinishedPulling="2026-03-18 14:30:00.313721224 +0000 UTC m=+1784.442849671" observedRunningTime="2026-03-18 14:30:00.594850129 +0000 UTC m=+1784.723978586" watchObservedRunningTime="2026-03-18 14:30:00.605672661 +0000 UTC m=+1784.734801118"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.633118 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.638094 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.638137 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpmn9\" (UniqueName: \"kubernetes.io/projected/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-kube-api-access-mpmn9\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.638161 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.638180 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.638199 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-scripts\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.660165 4857 scope.go:117] "RemoveContainer" containerID="391b56c5db2ec1f7ec9720f61219a1f85b24f08b1a3a2c837709caa37e1356f1"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.673189 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.683051 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.693010 4857 scope.go:117] "RemoveContainer" containerID="3e56b66b8c0b729d35d9fa7afd928d3c89ab581c7994587c38d517ff56646d0f"
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.726713 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data" (OuterVolumeSpecName: "config-data") pod "cc0a750e-88e6-4743-bd8e-4deb0f9e121f" (UID: "cc0a750e-88e6-4743-bd8e-4deb0f9e121f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.740809 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.740865 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc0a750e-88e6-4743-bd8e-4deb0f9e121f-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.918711 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.962807 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:30:00 crc kubenswrapper[4857]: I0318 14:30:00.992824 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:30:01 crc kubenswrapper[4857]: E0318 14:30:01.013535 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-notification-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.013884 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-notification-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: E0318 14:30:01.013928 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="sg-core"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.013939 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="sg-core"
Mar 18 14:30:01 crc kubenswrapper[4857]: E0318 14:30:01.013987 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-central-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.013994 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-central-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: E0318 14:30:01.014015 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="proxy-httpd"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.014023 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="proxy-httpd"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.016386 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="proxy-httpd"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.016428 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-notification-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.016454 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="ceilometer-central-agent"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.016472 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" containerName="sg-core"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.074564 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.076955 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.086220 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.091357 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.123161 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564070-fjwnr"]
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168173 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168290 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168431 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrnc\" (UniqueName: \"kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168512 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168708 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.168911 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.169006 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.199502 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0a750e-88e6-4743-bd8e-4deb0f9e121f" path="/var/lib/kubelet/pods/cc0a750e-88e6-4743-bd8e-4deb0f9e121f/volumes"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.246778 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"]
Mar 18 14:30:01 crc kubenswrapper[4857]: W0318 14:30:01.251977 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f42948a_83fe_49d9_85a5_8a9c14e87b71.slice/crio-7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a WatchSource:0}: Error finding container 7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a: Status 404 returned error can't find the container with id 7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272109 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272299 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272443 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272569 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272647 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272741 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.272852 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twrnc\" (UniqueName: \"kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.275824 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.279176 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.279926 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.281358 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.281709 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.281988 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.298662 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrnc\" (UniqueName: \"kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc\") pod \"ceilometer-0\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.421341 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.607735 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" event={"ID":"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730","Type":"ContainerStarted","Data":"6606a8a60a739ac270a77bc194ebce3e61614acb18522aa5aae031a29c980b88"}
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.619397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" event={"ID":"3f42948a-83fe-49d9-85a5-8a9c14e87b71","Type":"ContainerStarted","Data":"46e0eca70d959e891cd19a9591a50c82e0d3d19247883022ff70420c81089597"}
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.619448 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" event={"ID":"3f42948a-83fe-49d9-85a5-8a9c14e87b71","Type":"ContainerStarted","Data":"7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a"}
Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.647578 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" podStartSLOduration=1.64754946 podStartE2EDuration="1.64754946s" podCreationTimestamp="2026-03-18 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:01.640528793 +0000 UTC m=+1785.769657250" watchObservedRunningTime="2026-03-18 14:30:01.64754946 +0000 UTC m=+1785.776677917"
Mar 18 14:30:01 crc kubenswrapper[4857]: W0318 14:30:01.994588 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277feb07_ef83_422f_832e_7d37be471a7a.slice/crio-67588061f16a2c22a0a61972754d8a1ee73456a819ba9490dbb90fb5822b5d5a WatchSource:0}: Error
finding container 67588061f16a2c22a0a61972754d8a1ee73456a819ba9490dbb90fb5822b5d5a: Status 404 returned error can't find the container with id 67588061f16a2c22a0a61972754d8a1ee73456a819ba9490dbb90fb5822b5d5a Mar 18 14:30:01 crc kubenswrapper[4857]: I0318 14:30:01.997545 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:02 crc kubenswrapper[4857]: I0318 14:30:02.637508 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerStarted","Data":"67588061f16a2c22a0a61972754d8a1ee73456a819ba9490dbb90fb5822b5d5a"} Mar 18 14:30:02 crc kubenswrapper[4857]: I0318 14:30:02.642281 4857 generic.go:334] "Generic (PLEG): container finished" podID="3f42948a-83fe-49d9-85a5-8a9c14e87b71" containerID="46e0eca70d959e891cd19a9591a50c82e0d3d19247883022ff70420c81089597" exitCode=0 Mar 18 14:30:02 crc kubenswrapper[4857]: I0318 14:30:02.642330 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" event={"ID":"3f42948a-83fe-49d9-85a5-8a9c14e87b71","Type":"ContainerDied","Data":"46e0eca70d959e891cd19a9591a50c82e0d3d19247883022ff70420c81089597"} Mar 18 14:30:03 crc kubenswrapper[4857]: I0318 14:30:03.168435 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:30:03 crc kubenswrapper[4857]: E0318 14:30:03.172886 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:30:03 crc kubenswrapper[4857]: I0318 14:30:03.504980 4857 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:03 crc kubenswrapper[4857]: I0318 14:30:03.687870 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerStarted","Data":"5104c7401c284a18bf8ca3655b3b6abe7f1f825f3f21d428ae2bf299d7b12655"} Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.179048 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.285202 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume\") pod \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.285423 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume\") pod \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.285719 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwb5\" (UniqueName: \"kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5\") pod \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\" (UID: \"3f42948a-83fe-49d9-85a5-8a9c14e87b71\") " Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.288376 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f42948a-83fe-49d9-85a5-8a9c14e87b71" (UID: 
"3f42948a-83fe-49d9-85a5-8a9c14e87b71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.288894 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f42948a-83fe-49d9-85a5-8a9c14e87b71-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.295733 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f42948a-83fe-49d9-85a5-8a9c14e87b71" (UID: "3f42948a-83fe-49d9-85a5-8a9c14e87b71"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.296068 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5" (OuterVolumeSpecName: "kube-api-access-zkwb5") pod "3f42948a-83fe-49d9-85a5-8a9c14e87b71" (UID: "3f42948a-83fe-49d9-85a5-8a9c14e87b71"). InnerVolumeSpecName "kube-api-access-zkwb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.324183 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.325912 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.391046 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwb5\" (UniqueName: \"kubernetes.io/projected/3f42948a-83fe-49d9-85a5-8a9c14e87b71-kube-api-access-zkwb5\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.391083 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f42948a-83fe-49d9-85a5-8a9c14e87b71-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.393368 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.702615 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" event={"ID":"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730","Type":"ContainerStarted","Data":"c7e5c9a676d0f25057307499b76119a8b8c577f909839ceafb63b39524d31878"} Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.705307 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerStarted","Data":"066fd5f6a271ffc4e875b4c1310166e95380cc780fbe524787af7d25562be13c"} Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.708351 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.709829 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr" event={"ID":"3f42948a-83fe-49d9-85a5-8a9c14e87b71","Type":"ContainerDied","Data":"7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a"} Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.709963 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ddc32be15ff5cf3d351cab311f35d0ff554f112150b769f7dc91e53376b963a" Mar 18 14:30:04 crc kubenswrapper[4857]: I0318 14:30:04.728032 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" podStartSLOduration=2.613273381 podStartE2EDuration="4.728001008s" podCreationTimestamp="2026-03-18 14:30:00 +0000 UTC" firstStartedPulling="2026-03-18 14:30:01.074581661 +0000 UTC m=+1785.203710118" lastFinishedPulling="2026-03-18 14:30:03.189309288 +0000 UTC m=+1787.318437745" observedRunningTime="2026-03-18 14:30:04.723287199 +0000 UTC m=+1788.852415656" watchObservedRunningTime="2026-03-18 14:30:04.728001008 +0000 UTC m=+1788.857129465" Mar 18 14:30:05 crc kubenswrapper[4857]: I0318 14:30:05.796195 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:05 crc kubenswrapper[4857]: I0318 14:30:05.879130 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"] Mar 18 14:30:06 crc kubenswrapper[4857]: I0318 14:30:06.744831 4857 generic.go:334] "Generic (PLEG): container finished" podID="3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" containerID="c7e5c9a676d0f25057307499b76119a8b8c577f909839ceafb63b39524d31878" exitCode=0 Mar 18 14:30:06 crc kubenswrapper[4857]: I0318 14:30:06.745396 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" event={"ID":"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730","Type":"ContainerDied","Data":"c7e5c9a676d0f25057307499b76119a8b8c577f909839ceafb63b39524d31878"} Mar 18 14:30:06 crc kubenswrapper[4857]: I0318 14:30:06.750581 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerStarted","Data":"88086f9e79e46968e5ae3adff2ce8e1302366cbd4b8cae8039320d7853f21f02"} Mar 18 14:30:07 crc kubenswrapper[4857]: I0318 14:30:07.762062 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcd9f" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="registry-server" containerID="cri-o://f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49" gracePeriod=2 Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.186680 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.323380 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj7f\" (UniqueName: \"kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f\") pod \"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730\" (UID: \"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730\") " Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.337324 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f" (OuterVolumeSpecName: "kube-api-access-5cj7f") pod "3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" (UID: "3a8c4d1a-70c4-46f5-9a60-742ac9bfb730"). InnerVolumeSpecName "kube-api-access-5cj7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.433468 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj7f\" (UniqueName: \"kubernetes.io/projected/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730-kube-api-access-5cj7f\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.495967 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.638024 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities\") pod \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.638233 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content\") pod \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.638299 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq2tz\" (UniqueName: \"kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz\") pod \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\" (UID: \"aae08880-d5d8-45c0-81d4-70ccb59e4f27\") " Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.640686 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities" (OuterVolumeSpecName: "utilities") pod "aae08880-d5d8-45c0-81d4-70ccb59e4f27" (UID: "aae08880-d5d8-45c0-81d4-70ccb59e4f27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.641517 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.651743 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz" (OuterVolumeSpecName: "kube-api-access-mq2tz") pod "aae08880-d5d8-45c0-81d4-70ccb59e4f27" (UID: "aae08880-d5d8-45c0-81d4-70ccb59e4f27"). InnerVolumeSpecName "kube-api-access-mq2tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.714047 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aae08880-d5d8-45c0-81d4-70ccb59e4f27" (UID: "aae08880-d5d8-45c0-81d4-70ccb59e4f27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.745037 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aae08880-d5d8-45c0-81d4-70ccb59e4f27-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.745093 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq2tz\" (UniqueName: \"kubernetes.io/projected/aae08880-d5d8-45c0-81d4-70ccb59e4f27-kube-api-access-mq2tz\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.814406 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" event={"ID":"3a8c4d1a-70c4-46f5-9a60-742ac9bfb730","Type":"ContainerDied","Data":"6606a8a60a739ac270a77bc194ebce3e61614acb18522aa5aae031a29c980b88"} Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.814472 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6606a8a60a739ac270a77bc194ebce3e61614acb18522aa5aae031a29c980b88" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.814471 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564070-fjwnr" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.817083 4857 generic.go:334] "Generic (PLEG): container finished" podID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerID="f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49" exitCode=0 Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.817139 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerDied","Data":"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49"} Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.817174 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcd9f" event={"ID":"aae08880-d5d8-45c0-81d4-70ccb59e4f27","Type":"ContainerDied","Data":"6acc9dbfdb8dabe0332eb41aa334a8f440b2048b0c8bbb0c72beb878cd34cc19"} Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.817195 4857 scope.go:117] "RemoveContainer" containerID="f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.817361 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xcd9f" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.851973 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564064-jhjx7"] Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.863931 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564064-jhjx7"] Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.896676 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"] Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.910299 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcd9f"] Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.916941 4857 scope.go:117] "RemoveContainer" containerID="2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9" Mar 18 14:30:08 crc kubenswrapper[4857]: I0318 14:30:08.994658 4857 scope.go:117] "RemoveContainer" containerID="dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.058203 4857 scope.go:117] "RemoveContainer" containerID="f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49" Mar 18 14:30:09 crc kubenswrapper[4857]: E0318 14:30:09.058741 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49\": container with ID starting with f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49 not found: ID does not exist" containerID="f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.058791 4857 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49"} err="failed to get container status \"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49\": rpc error: code = NotFound desc = could not find container \"f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49\": container with ID starting with f16e540ddf7ab3b2a8de66946120568aa8e694721b8db7480b53d10996690c49 not found: ID does not exist" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.058820 4857 scope.go:117] "RemoveContainer" containerID="2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9" Mar 18 14:30:09 crc kubenswrapper[4857]: E0318 14:30:09.063014 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9\": container with ID starting with 2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9 not found: ID does not exist" containerID="2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.063065 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9"} err="failed to get container status \"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9\": rpc error: code = NotFound desc = could not find container \"2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9\": container with ID starting with 2b38a486109276a2e2261ddaf3d0dcc4b701622f372cfaaa8d51ee8b75f113e9 not found: ID does not exist" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.063098 4857 scope.go:117] "RemoveContainer" containerID="dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5" Mar 18 14:30:09 crc kubenswrapper[4857]: E0318 14:30:09.066156 4857 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5\": container with ID starting with dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5 not found: ID does not exist" containerID="dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.066193 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5"} err="failed to get container status \"dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5\": rpc error: code = NotFound desc = could not find container \"dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5\": container with ID starting with dbd22a3693ffbfca1e3d599d6753b9f5715b2925dbb9060eba02557821fbf9c5 not found: ID does not exist" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.180173 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" path="/var/lib/kubelet/pods/aae08880-d5d8-45c0-81d4-70ccb59e4f27/volumes" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.193141 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb34e902-9484-4d17-97ab-77985e7714e4" path="/var/lib/kubelet/pods/eb34e902-9484-4d17-97ab-77985e7714e4/volumes" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.838675 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerStarted","Data":"114b49028a1877335e0e30a2fc211c9ece0100ef2b9df95de0e0bde33851176f"} Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.838860 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-central-agent" 
containerID="cri-o://5104c7401c284a18bf8ca3655b3b6abe7f1f825f3f21d428ae2bf299d7b12655" gracePeriod=30 Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.839115 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-notification-agent" containerID="cri-o://066fd5f6a271ffc4e875b4c1310166e95380cc780fbe524787af7d25562be13c" gracePeriod=30 Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.839135 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="proxy-httpd" containerID="cri-o://114b49028a1877335e0e30a2fc211c9ece0100ef2b9df95de0e0bde33851176f" gracePeriod=30 Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.839154 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="sg-core" containerID="cri-o://88086f9e79e46968e5ae3adff2ce8e1302366cbd4b8cae8039320d7853f21f02" gracePeriod=30 Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.839535 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:30:09 crc kubenswrapper[4857]: I0318 14:30:09.883588 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.052695151 podStartE2EDuration="9.883563537s" podCreationTimestamp="2026-03-18 14:30:00 +0000 UTC" firstStartedPulling="2026-03-18 14:30:01.997082956 +0000 UTC m=+1786.126211413" lastFinishedPulling="2026-03-18 14:30:08.827951342 +0000 UTC m=+1792.957079799" observedRunningTime="2026-03-18 14:30:09.870849957 +0000 UTC m=+1793.999978414" watchObservedRunningTime="2026-03-18 14:30:09.883563537 +0000 UTC m=+1794.012691984" Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855036 4857 generic.go:334] 
"Generic (PLEG): container finished" podID="277feb07-ef83-422f-832e-7d37be471a7a" containerID="114b49028a1877335e0e30a2fc211c9ece0100ef2b9df95de0e0bde33851176f" exitCode=0 Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855391 4857 generic.go:334] "Generic (PLEG): container finished" podID="277feb07-ef83-422f-832e-7d37be471a7a" containerID="88086f9e79e46968e5ae3adff2ce8e1302366cbd4b8cae8039320d7853f21f02" exitCode=2 Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855406 4857 generic.go:334] "Generic (PLEG): container finished" podID="277feb07-ef83-422f-832e-7d37be471a7a" containerID="066fd5f6a271ffc4e875b4c1310166e95380cc780fbe524787af7d25562be13c" exitCode=0 Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855128 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerDied","Data":"114b49028a1877335e0e30a2fc211c9ece0100ef2b9df95de0e0bde33851176f"} Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855459 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerDied","Data":"88086f9e79e46968e5ae3adff2ce8e1302366cbd4b8cae8039320d7853f21f02"} Mar 18 14:30:10 crc kubenswrapper[4857]: I0318 14:30:10.855481 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerDied","Data":"066fd5f6a271ffc4e875b4c1310166e95380cc780fbe524787af7d25562be13c"} Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.515443 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-v4f6x"] Mar 18 14:30:15 crc kubenswrapper[4857]: E0318 14:30:15.516682 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="registry-server" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.516728 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="registry-server" Mar 18 14:30:15 crc kubenswrapper[4857]: E0318 14:30:15.516779 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="extract-utilities" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.516792 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="extract-utilities" Mar 18 14:30:15 crc kubenswrapper[4857]: E0318 14:30:15.516826 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="extract-content" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.516837 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="extract-content" Mar 18 14:30:15 crc kubenswrapper[4857]: E0318 14:30:15.516877 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" containerName="oc" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.516887 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" containerName="oc" Mar 18 14:30:15 crc kubenswrapper[4857]: E0318 14:30:15.516937 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f42948a-83fe-49d9-85a5-8a9c14e87b71" containerName="collect-profiles" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.516947 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f42948a-83fe-49d9-85a5-8a9c14e87b71" containerName="collect-profiles" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.517293 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" containerName="oc" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.517348 4857 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3f42948a-83fe-49d9-85a5-8a9c14e87b71" containerName="collect-profiles" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.517384 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae08880-d5d8-45c0-81d4-70ccb59e4f27" containerName="registry-server" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.518843 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.533855 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4f6x"] Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.627839 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4cc7-account-create-update-k2fkp"] Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.630319 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.635331 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.639956 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.640409 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4cc7-account-create-update-k2fkp"] Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.640861 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pj54\" (UniqueName: 
\"kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.742463 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.742558 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4hf\" (UniqueName: \"kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.742773 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pj54\" (UniqueName: \"kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.742843 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.744571 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.763164 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pj54\" (UniqueName: \"kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54\") pod \"aodh-db-create-v4f6x\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.842192 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.845156 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4hf\" (UniqueName: \"kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.845549 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.846454 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " 
pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.863359 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4hf\" (UniqueName: \"kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf\") pod \"aodh-4cc7-account-create-update-k2fkp\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:15 crc kubenswrapper[4857]: I0318 14:30:15.953523 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.451577 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-v4f6x"] Mar 18 14:30:16 crc kubenswrapper[4857]: W0318 14:30:16.457002 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf99b0f79_9e9d_469e_90c8_3dbb0a5893fc.slice/crio-3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5 WatchSource:0}: Error finding container 3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5: Status 404 returned error can't find the container with id 3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5 Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.654430 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4cc7-account-create-update-k2fkp"] Mar 18 14:30:16 crc kubenswrapper[4857]: W0318 14:30:16.656289 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e242cf0_f297_425c_8bc7_f6602a60faea.slice/crio-fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291 WatchSource:0}: Error finding container fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291: Status 404 returned error can't find the container 
with id fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291 Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.976603 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerDied","Data":"5104c7401c284a18bf8ca3655b3b6abe7f1f825f3f21d428ae2bf299d7b12655"} Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.976966 4857 generic.go:334] "Generic (PLEG): container finished" podID="277feb07-ef83-422f-832e-7d37be471a7a" containerID="5104c7401c284a18bf8ca3655b3b6abe7f1f825f3f21d428ae2bf299d7b12655" exitCode=0 Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.987837 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4f6x" event={"ID":"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc","Type":"ContainerStarted","Data":"c31ab01511e35915445a574b0de906d965ac4f6132f2aa90ccbbc015f2e45e79"} Mar 18 14:30:16 crc kubenswrapper[4857]: I0318 14:30:16.987896 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4f6x" event={"ID":"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc","Type":"ContainerStarted","Data":"3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5"} Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:16.994924 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4cc7-account-create-update-k2fkp" event={"ID":"9e242cf0-f297-425c-8bc7-f6602a60faea","Type":"ContainerStarted","Data":"a0f655471d1e1343ce3fc487195d7d991fee99f4c39fb94478d8c4f416cd9e51"} Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:16.994988 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4cc7-account-create-update-k2fkp" event={"ID":"9e242cf0-f297-425c-8bc7-f6602a60faea","Type":"ContainerStarted","Data":"fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291"} Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.008175 4857 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/aodh-db-create-v4f6x" podStartSLOduration=2.008155005 podStartE2EDuration="2.008155005s" podCreationTimestamp="2026-03-18 14:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:17.002951154 +0000 UTC m=+1801.132079611" watchObservedRunningTime="2026-03-18 14:30:17.008155005 +0000 UTC m=+1801.137283462" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.036029 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-4cc7-account-create-update-k2fkp" podStartSLOduration=2.036000126 podStartE2EDuration="2.036000126s" podCreationTimestamp="2026-03-18 14:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:17.022061595 +0000 UTC m=+1801.151190052" watchObservedRunningTime="2026-03-18 14:30:17.036000126 +0000 UTC m=+1801.165128583" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.190962 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:30:17 crc kubenswrapper[4857]: E0318 14:30:17.191360 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.406409 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.490546 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twrnc\" (UniqueName: \"kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.490612 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.490823 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.490856 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.490999 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.491081 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.491135 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd\") pod \"277feb07-ef83-422f-832e-7d37be471a7a\" (UID: \"277feb07-ef83-422f-832e-7d37be471a7a\") " Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.491450 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.491782 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.492158 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.492197 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/277feb07-ef83-422f-832e-7d37be471a7a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.498206 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts" (OuterVolumeSpecName: "scripts") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.499598 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc" (OuterVolumeSpecName: "kube-api-access-twrnc") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "kube-api-access-twrnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.542077 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.594539 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.594583 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twrnc\" (UniqueName: \"kubernetes.io/projected/277feb07-ef83-422f-832e-7d37be471a7a-kube-api-access-twrnc\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.594611 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.611655 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.634342 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data" (OuterVolumeSpecName: "config-data") pod "277feb07-ef83-422f-832e-7d37be471a7a" (UID: "277feb07-ef83-422f-832e-7d37be471a7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.696837 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:17 crc kubenswrapper[4857]: I0318 14:30:17.696970 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277feb07-ef83-422f-832e-7d37be471a7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.019947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"277feb07-ef83-422f-832e-7d37be471a7a","Type":"ContainerDied","Data":"67588061f16a2c22a0a61972754d8a1ee73456a819ba9490dbb90fb5822b5d5a"} Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.020152 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.020158 4857 scope.go:117] "RemoveContainer" containerID="114b49028a1877335e0e30a2fc211c9ece0100ef2b9df95de0e0bde33851176f" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.070148 4857 scope.go:117] "RemoveContainer" containerID="88086f9e79e46968e5ae3adff2ce8e1302366cbd4b8cae8039320d7853f21f02" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.083242 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.096241 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.109898 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:18 crc kubenswrapper[4857]: E0318 14:30:18.110809 4857 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-central-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.110843 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-central-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: E0318 14:30:18.110892 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="sg-core" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.110902 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="sg-core" Mar 18 14:30:18 crc kubenswrapper[4857]: E0318 14:30:18.110932 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-notification-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.110942 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-notification-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: E0318 14:30:18.110962 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="proxy-httpd" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.110972 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="proxy-httpd" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.111250 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-central-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.111286 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="sg-core" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.111304 4857 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="proxy-httpd" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.111326 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="277feb07-ef83-422f-832e-7d37be471a7a" containerName="ceilometer-notification-agent" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.114297 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.117921 4857 scope.go:117] "RemoveContainer" containerID="066fd5f6a271ffc4e875b4c1310166e95380cc780fbe524787af7d25562be13c" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.117998 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.118157 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.123320 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.193053 4857 scope.go:117] "RemoveContainer" containerID="5104c7401c284a18bf8ca3655b3b6abe7f1f825f3f21d428ae2bf299d7b12655" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212270 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212484 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbwc\" (UniqueName: \"kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc\") pod \"ceilometer-0\" (UID: 
\"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212523 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212568 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212585 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212611 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.212722 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316239 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316310 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316442 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316588 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffbwc\" (UniqueName: \"kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316630 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316667 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316688 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.316902 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.317283 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.323089 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.323593 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.325737 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.337064 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.346654 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffbwc\" (UniqueName: \"kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc\") pod \"ceilometer-0\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " pod="openstack/ceilometer-0" Mar 18 14:30:18 crc kubenswrapper[4857]: I0318 14:30:18.470624 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:19 crc kubenswrapper[4857]: I0318 14:30:19.354527 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277feb07-ef83-422f-832e-7d37be471a7a" path="/var/lib/kubelet/pods/277feb07-ef83-422f-832e-7d37be471a7a/volumes" Mar 18 14:30:19 crc kubenswrapper[4857]: W0318 14:30:19.363380 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83951418_11ce_418d_b66f_7c2829e16568.slice/crio-0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a WatchSource:0}: Error finding container 0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a: Status 404 returned error can't find the container with id 0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a Mar 18 14:30:19 crc kubenswrapper[4857]: I0318 14:30:19.380028 4857 generic.go:334] "Generic (PLEG): container finished" podID="f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" 
containerID="c31ab01511e35915445a574b0de906d965ac4f6132f2aa90ccbbc015f2e45e79" exitCode=0 Mar 18 14:30:19 crc kubenswrapper[4857]: I0318 14:30:19.380183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4f6x" event={"ID":"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc","Type":"ContainerDied","Data":"c31ab01511e35915445a574b0de906d965ac4f6132f2aa90ccbbc015f2e45e79"} Mar 18 14:30:19 crc kubenswrapper[4857]: I0318 14:30:19.383541 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:20 crc kubenswrapper[4857]: I0318 14:30:20.404664 4857 generic.go:334] "Generic (PLEG): container finished" podID="9e242cf0-f297-425c-8bc7-f6602a60faea" containerID="a0f655471d1e1343ce3fc487195d7d991fee99f4c39fb94478d8c4f416cd9e51" exitCode=0 Mar 18 14:30:20 crc kubenswrapper[4857]: I0318 14:30:20.404820 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4cc7-account-create-update-k2fkp" event={"ID":"9e242cf0-f297-425c-8bc7-f6602a60faea","Type":"ContainerDied","Data":"a0f655471d1e1343ce3fc487195d7d991fee99f4c39fb94478d8c4f416cd9e51"} Mar 18 14:30:20 crc kubenswrapper[4857]: I0318 14:30:20.408983 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerStarted","Data":"1de61d1d1510ea30bfd9a0d8584be87e1bdb5fbcdd0f3b85c8a2c2c73a6542a8"} Mar 18 14:30:20 crc kubenswrapper[4857]: I0318 14:30:20.409141 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerStarted","Data":"0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a"} Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.038634 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.165241 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts\") pod \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.165600 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pj54\" (UniqueName: \"kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54\") pod \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\" (UID: \"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc\") " Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.165729 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" (UID: "f99b0f79-9e9d-469e-90c8-3dbb0a5893fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.166934 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.182187 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54" (OuterVolumeSpecName: "kube-api-access-5pj54") pod "f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" (UID: "f99b0f79-9e9d-469e-90c8-3dbb0a5893fc"). InnerVolumeSpecName "kube-api-access-5pj54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.269706 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pj54\" (UniqueName: \"kubernetes.io/projected/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc-kube-api-access-5pj54\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.423119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-v4f6x" event={"ID":"f99b0f79-9e9d-469e-90c8-3dbb0a5893fc","Type":"ContainerDied","Data":"3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5"} Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.423193 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b51b97a0805cfbfbdeaf692f3ab9efdd8015f94a179b2285d2a0b0866be9af5" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.423261 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-v4f6x" Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.426918 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerStarted","Data":"c9246e81233a550573bc4ba1256d7c08bb110f0c6ee7e0823a74fb4e43ad623f"} Mar 18 14:30:21 crc kubenswrapper[4857]: I0318 14:30:21.959767 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.090404 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts\") pod \"9e242cf0-f297-425c-8bc7-f6602a60faea\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.090591 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh4hf\" (UniqueName: \"kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf\") pod \"9e242cf0-f297-425c-8bc7-f6602a60faea\" (UID: \"9e242cf0-f297-425c-8bc7-f6602a60faea\") " Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.090927 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e242cf0-f297-425c-8bc7-f6602a60faea" (UID: "9e242cf0-f297-425c-8bc7-f6602a60faea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.091983 4857 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e242cf0-f297-425c-8bc7-f6602a60faea-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.103056 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf" (OuterVolumeSpecName: "kube-api-access-dh4hf") pod "9e242cf0-f297-425c-8bc7-f6602a60faea" (UID: "9e242cf0-f297-425c-8bc7-f6602a60faea"). InnerVolumeSpecName "kube-api-access-dh4hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.194136 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh4hf\" (UniqueName: \"kubernetes.io/projected/9e242cf0-f297-425c-8bc7-f6602a60faea-kube-api-access-dh4hf\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.440508 4857 generic.go:334] "Generic (PLEG): container finished" podID="a48c014c-1b70-4d0c-b01b-9c1060620b0e" containerID="3f63307fe76eb7aff1a9297411edb40614fbf2e0d0c93a40fb6eb2bf63d20c99" exitCode=0 Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.440586 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" event={"ID":"a48c014c-1b70-4d0c-b01b-9c1060620b0e","Type":"ContainerDied","Data":"3f63307fe76eb7aff1a9297411edb40614fbf2e0d0c93a40fb6eb2bf63d20c99"} Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.444728 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4cc7-account-create-update-k2fkp" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.444773 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4cc7-account-create-update-k2fkp" event={"ID":"9e242cf0-f297-425c-8bc7-f6602a60faea","Type":"ContainerDied","Data":"fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291"} Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.444830 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fabc17cfd17207e676f5a11620762f7f070ffc99e3164d87bed345d2dc518291" Mar 18 14:30:22 crc kubenswrapper[4857]: I0318 14:30:22.448243 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerStarted","Data":"d8c431dea970535feb0393cdff88aee5068915fb73c41137ce7dac0fc68e3554"} Mar 18 14:30:23 crc kubenswrapper[4857]: I0318 14:30:23.925103 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.057571 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle\") pod \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.057713 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data\") pod \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.057763 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzg8b\" (UniqueName: \"kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b\") pod \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.057817 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts\") pod \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\" (UID: \"a48c014c-1b70-4d0c-b01b-9c1060620b0e\") " Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.063502 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b" (OuterVolumeSpecName: "kube-api-access-mzg8b") pod "a48c014c-1b70-4d0c-b01b-9c1060620b0e" (UID: "a48c014c-1b70-4d0c-b01b-9c1060620b0e"). InnerVolumeSpecName "kube-api-access-mzg8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.065382 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts" (OuterVolumeSpecName: "scripts") pod "a48c014c-1b70-4d0c-b01b-9c1060620b0e" (UID: "a48c014c-1b70-4d0c-b01b-9c1060620b0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.090180 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a48c014c-1b70-4d0c-b01b-9c1060620b0e" (UID: "a48c014c-1b70-4d0c-b01b-9c1060620b0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.091447 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data" (OuterVolumeSpecName: "config-data") pod "a48c014c-1b70-4d0c-b01b-9c1060620b0e" (UID: "a48c014c-1b70-4d0c-b01b-9c1060620b0e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.160665 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.161052 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.161068 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzg8b\" (UniqueName: \"kubernetes.io/projected/a48c014c-1b70-4d0c-b01b-9c1060620b0e-kube-api-access-mzg8b\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.161085 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a48c014c-1b70-4d0c-b01b-9c1060620b0e-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.472569 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" event={"ID":"a48c014c-1b70-4d0c-b01b-9c1060620b0e","Type":"ContainerDied","Data":"7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89"} Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.472617 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aede1f7ef962915d040c890d173a9fc90dd910fae736d252f35b9c200479e89" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.472679 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-h7rnb" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.619806 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 14:30:24 crc kubenswrapper[4857]: E0318 14:30:24.620433 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" containerName="mariadb-database-create" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620461 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" containerName="mariadb-database-create" Mar 18 14:30:24 crc kubenswrapper[4857]: E0318 14:30:24.620489 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48c014c-1b70-4d0c-b01b-9c1060620b0e" containerName="nova-cell0-conductor-db-sync" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620498 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48c014c-1b70-4d0c-b01b-9c1060620b0e" containerName="nova-cell0-conductor-db-sync" Mar 18 14:30:24 crc kubenswrapper[4857]: E0318 14:30:24.620551 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e242cf0-f297-425c-8bc7-f6602a60faea" containerName="mariadb-account-create-update" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620561 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e242cf0-f297-425c-8bc7-f6602a60faea" containerName="mariadb-account-create-update" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620850 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e242cf0-f297-425c-8bc7-f6602a60faea" containerName="mariadb-account-create-update" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620869 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" containerName="mariadb-database-create" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.620881 4857 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a48c014c-1b70-4d0c-b01b-9c1060620b0e" containerName="nova-cell0-conductor-db-sync" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.621710 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.628247 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fqdnm" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.628515 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.636158 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.783130 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.783426 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.783561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwg4d\" (UniqueName: \"kubernetes.io/projected/09373ed6-5d90-471d-a45c-4f39dc46caf8-kube-api-access-zwg4d\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " 
pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.886603 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.886676 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.886794 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwg4d\" (UniqueName: \"kubernetes.io/projected/09373ed6-5d90-471d-a45c-4f39dc46caf8-kube-api-access-zwg4d\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.892635 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.895679 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09373ed6-5d90-471d-a45c-4f39dc46caf8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.909322 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwg4d\" (UniqueName: \"kubernetes.io/projected/09373ed6-5d90-471d-a45c-4f39dc46caf8-kube-api-access-zwg4d\") pod \"nova-cell0-conductor-0\" (UID: \"09373ed6-5d90-471d-a45c-4f39dc46caf8\") " pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:24 crc kubenswrapper[4857]: I0318 14:30:24.947888 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:25 crc kubenswrapper[4857]: I0318 14:30:25.445354 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 14:30:25 crc kubenswrapper[4857]: I0318 14:30:25.499384 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"09373ed6-5d90-471d-a45c-4f39dc46caf8","Type":"ContainerStarted","Data":"d11003b9050e6cf9ce21b8877c6e213aa5d0fdc7c51bbcb1c91a0f33d0fad282"} Mar 18 14:30:25 crc kubenswrapper[4857]: I0318 14:30:25.504953 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerStarted","Data":"7a8bcdcf54262706908cba206ed52a032a504d1886a28f67bc1bc5fb9b17aba5"} Mar 18 14:30:25 crc kubenswrapper[4857]: I0318 14:30:25.505217 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:30:25 crc kubenswrapper[4857]: I0318 14:30:25.532216 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.467150019 podStartE2EDuration="7.532184169s" podCreationTimestamp="2026-03-18 14:30:18 +0000 UTC" firstStartedPulling="2026-03-18 14:30:19.369500227 +0000 UTC m=+1803.498628684" lastFinishedPulling="2026-03-18 14:30:24.434534387 +0000 UTC m=+1808.563662834" observedRunningTime="2026-03-18 14:30:25.524828293 +0000 UTC m=+1809.653956760" watchObservedRunningTime="2026-03-18 
14:30:25.532184169 +0000 UTC m=+1809.661312626" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.153843 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-sb47k"] Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.156494 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.167092 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.167219 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.167438 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-fvfqd" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.168578 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.195161 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-sb47k"] Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.331313 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqp2\" (UniqueName: \"kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.331514 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc 
kubenswrapper[4857]: I0318 14:30:26.331563 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.331979 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.435056 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.435222 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqp2\" (UniqueName: \"kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.435311 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.435338 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.440494 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.441324 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.447589 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.462061 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqp2\" (UniqueName: \"kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2\") pod \"aodh-db-sync-sb47k\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.491908 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.519265 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"09373ed6-5d90-471d-a45c-4f39dc46caf8","Type":"ContainerStarted","Data":"5af1ee348cc4c4614679c8cc4f2be6a439e60b9a071b8efc57814e1caa3821cf"} Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.519489 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 18 14:30:26 crc kubenswrapper[4857]: I0318 14:30:26.552877 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.5528450830000002 podStartE2EDuration="2.552845083s" podCreationTimestamp="2026-03-18 14:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:26.542848732 +0000 UTC m=+1810.671977189" watchObservedRunningTime="2026-03-18 14:30:26.552845083 +0000 UTC m=+1810.681973530" Mar 18 14:30:27 crc kubenswrapper[4857]: I0318 14:30:27.298741 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-sb47k"] Mar 18 14:30:27 crc kubenswrapper[4857]: I0318 14:30:27.536205 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sb47k" event={"ID":"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c","Type":"ContainerStarted","Data":"0179e562b3be530177a889724f55ea007cbe8c21aa5356c15217789ac09a87a0"} Mar 18 14:30:29 crc kubenswrapper[4857]: I0318 14:30:29.165038 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:30:29 crc kubenswrapper[4857]: E0318 14:30:29.166093 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 14:30:34 crc kubenswrapper[4857]: I0318 14:30:34.976688 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 18 14:30:35 crc kubenswrapper[4857]: I0318 14:30:35.001351 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sb47k" event={"ID":"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c","Type":"ContainerStarted","Data":"b34e957dfaf8a14a81c6916341d900fdf555b69af4861ae62a71754e5b132d09"}
Mar 18 14:30:35 crc kubenswrapper[4857]: I0318 14:30:35.037966 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-sb47k" podStartSLOduration=2.697625737 podStartE2EDuration="9.037943749s" podCreationTimestamp="2026-03-18 14:30:26 +0000 UTC" firstStartedPulling="2026-03-18 14:30:27.29261851 +0000 UTC m=+1811.421746967" lastFinishedPulling="2026-03-18 14:30:33.632936522 +0000 UTC m=+1817.762064979" observedRunningTime="2026-03-18 14:30:35.023000613 +0000 UTC m=+1819.152129070" watchObservedRunningTime="2026-03-18 14:30:35.037943749 +0000 UTC m=+1819.167072206"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.045993 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pxxc5"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.050747 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.054501 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.071054 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pxxc5"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.073178 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.177010 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.177126 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.177458 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.177515 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzqq5\" (UniqueName: \"kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.622800 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.622893 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.622925 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzqq5\" (UniqueName: \"kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.623214 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.655574 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.659546 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.680189 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.700935 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzqq5\" (UniqueName: \"kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5\") pod \"nova-cell0-cell-mapping-pxxc5\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.709885 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.712021 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.717488 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.725963 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvg4s\" (UniqueName: \"kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.726107 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.726137 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.740176 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.811375 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.826200 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.831392 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.831500 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.833248 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvg4s\" (UniqueName: \"kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.834300 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.837202 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.837957 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.844818 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 14:30:37 crc kubenswrapper[4857]: I0318 14:30:37.865600 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvg4s\" (UniqueName: \"kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s\") pod \"nova-scheduler-0\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " pod="openstack/nova-scheduler-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.272125 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pxxc5"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.278562 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.282145 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.282422 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.282569 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66hx\" (UniqueName: \"kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.282776 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.344737 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.355392 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.381387 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.407321 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.411629 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.415192 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.415300 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66hx\" (UniqueName: \"kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.415498 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.416304 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.426245 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.430375 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.450508 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66hx\" (UniqueName: \"kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx\") pod \"nova-api-0\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.464842 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.546533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.547453 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.547855 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zwgz\" (UniqueName: \"kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.906034 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.912089 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.922148 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 18 14:30:38 crc kubenswrapper[4857]: I0318 14:30:38.978364 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.054706 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.055050 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.061468 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zwgz\" (UniqueName: \"kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.077850 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.078215 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.111147 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zwgz\" (UniqueName: \"kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.165837 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"]
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.745473 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.792024 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"]
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.793485 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.801889 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.802177 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.802308 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.802625 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdf2w\" (UniqueName: \"kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.917695 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919478 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919700 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdf2w\" (UniqueName: \"kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919764 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919843 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5fb\" (UniqueName: \"kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919916 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.919985 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.920140 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.920251 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.920318 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.920902 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.945094 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.961593 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdf2w\" (UniqueName: \"kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:39 crc kubenswrapper[4857]: I0318 14:30:39.962318 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data\") pod \"nova-metadata-0\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " pod="openstack/nova-metadata-0"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.026448 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.026512 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw5fb\" (UniqueName: \"kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.026562 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.026625 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.026767 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.027539 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.028440 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.027491 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.029091 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.029320 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.036404 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.060621 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5fb\" (UniqueName: \"kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb\") pod \"dnsmasq-dns-9b86998b5-82vv5\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.221511 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.222812 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-82vv5"
Mar 18 14:30:40 crc kubenswrapper[4857]: I0318 14:30:40.846335 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pxxc5"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.021195 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pxxc5" event={"ID":"0b95917b-c40b-4bb7-8064-4d297f45711d","Type":"ContainerStarted","Data":"01dc2bb0a3b2c2b1aacba8c3205814d71c9f8c634f93b7085c3e4cfe29ae55aa"}
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.042652 4857 generic.go:334] "Generic (PLEG): container finished" podID="ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" containerID="b34e957dfaf8a14a81c6916341d900fdf555b69af4861ae62a71754e5b132d09" exitCode=0
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.042706 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sb47k" event={"ID":"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c","Type":"ContainerDied","Data":"b34e957dfaf8a14a81c6916341d900fdf555b69af4861ae62a71754e5b132d09"}
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.078097 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.349779 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xv86z"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.352427 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.356692 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.356951 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.405045 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.405119 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.405399 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.405430 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slcsb\" (UniqueName: \"kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.412941 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xv86z"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.479353 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.507870 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.508152 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slcsb\" (UniqueName: \"kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.508396 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.508475 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.517505 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.518477 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.519413 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.529931 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.542426 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slcsb\" (UniqueName: \"kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb\") pod \"nova-cell1-conductor-db-sync-xv86z\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " pod="openstack/nova-cell1-conductor-db-sync-xv86z"
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.629612 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"]
Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.699832 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xv86z" Mar 18 14:30:41 crc kubenswrapper[4857]: I0318 14:30:41.788597 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.357547 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8e0b9685-6239-42b2-8e7c-c9b29baa81de","Type":"ContainerStarted","Data":"437a1c3efc8be5d313573d0dc16ca66a2feecd8ffa95dcab6b538346117ccced"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.385192 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerStarted","Data":"0155e9a54aa7211c5b99bcda827941507cd2fd7f13fcfdcd96092be101ec8f39"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.447232 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pxxc5" event={"ID":"0b95917b-c40b-4bb7-8064-4d297f45711d","Type":"ContainerStarted","Data":"1b54f4453b663088b33f58cb6d42727ccb556d4b30e9ae03b4455ae278cd5a0a"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.453941 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfe4db16-d4dc-4222-87d6-71dc331417d5","Type":"ContainerStarted","Data":"21fe3b714472b065251859a2e29e2e86d98bc7acb4cd8f4802a15ab9ac9d508d"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.454983 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" event={"ID":"96e4143b-24b7-4dcd-a77c-42c89a55eea7","Type":"ContainerStarted","Data":"35cc88714d1336c1cd9722debd15d46528cf35f36f383c595f5ddd4de04f4faf"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.456266 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerStarted","Data":"8c5035fa4dfd43385020568c8c6be88f789f76bf0955026bbc964aad1ad399b3"} Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.482041 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pxxc5" podStartSLOduration=5.481995396 podStartE2EDuration="5.481995396s" podCreationTimestamp="2026-03-18 14:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:42.480108548 +0000 UTC m=+1826.609237005" watchObservedRunningTime="2026-03-18 14:30:42.481995396 +0000 UTC m=+1826.611123853" Mar 18 14:30:42 crc kubenswrapper[4857]: I0318 14:30:42.921744 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xv86z"] Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.205147 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.319882 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqp2\" (UniqueName: \"kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2\") pod \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.320176 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle\") pod \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.320303 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts\") pod \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.320387 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data\") pod \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\" (UID: \"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c\") " Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.364933 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2" (OuterVolumeSpecName: "kube-api-access-tkqp2") pod "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" (UID: "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c"). InnerVolumeSpecName "kube-api-access-tkqp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.413343 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts" (OuterVolumeSpecName: "scripts") pod "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" (UID: "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.421642 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data" (OuterVolumeSpecName: "config-data") pod "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" (UID: "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.732492 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" (UID: "ecf63b2b-fa66-4a0d-8a89-d7a07693b00c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.792813 4857 generic.go:334] "Generic (PLEG): container finished" podID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerID="bd461810cd5f1e28c1afed5289713d3b1e5055b946713d183b6d63186ed04cbb" exitCode=0 Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.793177 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" event={"ID":"96e4143b-24b7-4dcd-a77c-42c89a55eea7","Type":"ContainerDied","Data":"bd461810cd5f1e28c1afed5289713d3b1e5055b946713d183b6d63186ed04cbb"} Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.808667 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xv86z" event={"ID":"fdab76a8-c643-44eb-8fe5-7fd0ab42f634","Type":"ContainerStarted","Data":"3f1f38b13501dc913a7deab83524ae568d53c4e1a39001c748dfc43d60561b0e"} Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.813101 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-sb47k" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.813201 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-sb47k" event={"ID":"ecf63b2b-fa66-4a0d-8a89-d7a07693b00c","Type":"ContainerDied","Data":"0179e562b3be530177a889724f55ea007cbe8c21aa5356c15217789ac09a87a0"} Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.813234 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0179e562b3be530177a889724f55ea007cbe8c21aa5356c15217789ac09a87a0" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.842017 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.843959 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.849068 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqp2\" (UniqueName: \"kubernetes.io/projected/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-kube-api-access-tkqp2\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:43 crc kubenswrapper[4857]: I0318 14:30:43.849112 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.165426 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:30:44 crc kubenswrapper[4857]: E0318 14:30:44.165879 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.773028 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.801298 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.852998 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" event={"ID":"96e4143b-24b7-4dcd-a77c-42c89a55eea7","Type":"ContainerStarted","Data":"305b51e65d227539188c8f938554bdd396d7221363cb9dfda589f97ed5f7713e"} Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.853785 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.866329 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xv86z" event={"ID":"fdab76a8-c643-44eb-8fe5-7fd0ab42f634","Type":"ContainerStarted","Data":"49bbc9ac34c00e20ba6e4558560eedf0334afbf8faf6fb5efbe5ac367c8d9ac8"} Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.900373 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" podStartSLOduration=6.900345252 podStartE2EDuration="6.900345252s" podCreationTimestamp="2026-03-18 14:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:44.873986309 +0000 UTC m=+1829.003114766" watchObservedRunningTime="2026-03-18 14:30:44.900345252 
+0000 UTC m=+1829.029473709" Mar 18 14:30:44 crc kubenswrapper[4857]: I0318 14:30:44.911128 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xv86z" podStartSLOduration=3.911106403 podStartE2EDuration="3.911106403s" podCreationTimestamp="2026-03-18 14:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:30:44.896798503 +0000 UTC m=+1829.025926960" watchObservedRunningTime="2026-03-18 14:30:44.911106403 +0000 UTC m=+1829.040234860" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.211042 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 18 14:30:46 crc kubenswrapper[4857]: E0318 14:30:46.212013 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" containerName="aodh-db-sync" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.212037 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" containerName="aodh-db-sync" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.212451 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" containerName="aodh-db-sync" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.215437 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.228383 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-fvfqd" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.228913 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.231390 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.259358 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.259713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.260038 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.260112 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbvm\" (UniqueName: \"kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" 
Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.273829 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.364633 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.364697 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbvm\" (UniqueName: \"kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.364795 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.364903 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.391454 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.404293 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.404467 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbvm\" (UniqueName: \"kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.418588 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data\") pod \"aodh-0\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " pod="openstack/aodh-0" Mar 18 14:30:46 crc kubenswrapper[4857]: I0318 14:30:46.558514 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:30:48 crc kubenswrapper[4857]: I0318 14:30:48.703704 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 18 14:30:50 crc kubenswrapper[4857]: I0318 14:30:50.228566 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" Mar 18 14:30:50 crc kubenswrapper[4857]: I0318 14:30:50.353876 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:30:50 crc kubenswrapper[4857]: I0318 14:30:50.354337 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="dnsmasq-dns" containerID="cri-o://3eb1f6e0247b7e807c9dc1285e267168f64961b20b7feb2592d6cf7343c30ade" gracePeriod=10 Mar 18 14:30:50 crc kubenswrapper[4857]: I0318 14:30:50.893400 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="0626adfc-bb2b-4796-bd56-551264758fd6" containerID="3eb1f6e0247b7e807c9dc1285e267168f64961b20b7feb2592d6cf7343c30ade" exitCode=0 Mar 18 14:30:50 crc kubenswrapper[4857]: I0318 14:30:50.893873 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" event={"ID":"0626adfc-bb2b-4796-bd56-551264758fd6","Type":"ContainerDied","Data":"3eb1f6e0247b7e807c9dc1285e267168f64961b20b7feb2592d6cf7343c30ade"} Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.412143 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.763057 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.792687 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.792803 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.792883 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.792973 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qqclj\" (UniqueName: \"kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.793776 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.793935 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc\") pod \"0626adfc-bb2b-4796-bd56-551264758fd6\" (UID: \"0626adfc-bb2b-4796-bd56-551264758fd6\") " Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.823215 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj" (OuterVolumeSpecName: "kube-api-access-qqclj") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "kube-api-access-qqclj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.900098 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqclj\" (UniqueName: \"kubernetes.io/projected/0626adfc-bb2b-4796-bd56-551264758fd6-kube-api-access-qqclj\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.923168 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.925226 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.931343 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfe4db16-d4dc-4222-87d6-71dc331417d5","Type":"ContainerStarted","Data":"b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5"} Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.946544 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.955077 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" event={"ID":"0626adfc-bb2b-4796-bd56-551264758fd6","Type":"ContainerDied","Data":"fffb8cf206291c4279602b5ab13d0593d8bc344318dd234c41351bc2c22a3421"} Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.955150 4857 scope.go:117] "RemoveContainer" containerID="3eb1f6e0247b7e807c9dc1285e267168f64961b20b7feb2592d6cf7343c30ade" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.955323 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-62glg" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.961496 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.659260716 podStartE2EDuration="14.961473194s" podCreationTimestamp="2026-03-18 14:30:37 +0000 UTC" firstStartedPulling="2026-03-18 14:30:41.119206631 +0000 UTC m=+1825.248335088" lastFinishedPulling="2026-03-18 14:30:50.421419109 +0000 UTC m=+1834.550547566" observedRunningTime="2026-03-18 14:30:51.959414223 +0000 UTC m=+1836.088542680" watchObservedRunningTime="2026-03-18 14:30:51.961473194 +0000 UTC m=+1836.090601661" Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.964227 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerStarted","Data":"a7d683d95b64acb6b301dbe1fa93c50e58aa6c41884ed74726f5a30aba531446"} Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.964303 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerStarted","Data":"702d7bce8e59324645dd4740ced3e9950afaade8f68e44c2ea5847a3ca65bcc3"} Mar 18 14:30:51 crc kubenswrapper[4857]: 
I0318 14:30:51.964470 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-log" containerID="cri-o://702d7bce8e59324645dd4740ced3e9950afaade8f68e44c2ea5847a3ca65bcc3" gracePeriod=30 Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.964795 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-metadata" containerID="cri-o://a7d683d95b64acb6b301dbe1fa93c50e58aa6c41884ed74726f5a30aba531446" gracePeriod=30 Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.969397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8e0b9685-6239-42b2-8e7c-c9b29baa81de","Type":"ContainerStarted","Data":"a2e0bcd7972be629675ee676f80564735633bd51ab5e82050b9dbb6880b10950"} Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.969505 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a2e0bcd7972be629675ee676f80564735633bd51ab5e82050b9dbb6880b10950" gracePeriod=30 Mar 18 14:30:51 crc kubenswrapper[4857]: I0318 14:30:51.974890 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerStarted","Data":"870e8035908c7b736cb607c62d05afab8b3423a6ce2f0592f0898ea036099004"} Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.008516 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerStarted","Data":"baac5d7a84a41f29aca8b8d37c382e8a428f2044e60b2ff3ddd35e1c422a4dda"} Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 
14:30:52.010476 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.010498 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.010522 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.015186 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.028408 4857 scope.go:117] "RemoveContainer" containerID="88e55f5c97ff5f99b00940de098bce5d35e573430aa7256f4339ec1ad0b4dc3b" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.031959 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config" (OuterVolumeSpecName: "config") pod "0626adfc-bb2b-4796-bd56-551264758fd6" (UID: "0626adfc-bb2b-4796-bd56-551264758fd6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.036882 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.41442658 podStartE2EDuration="14.036853251s" podCreationTimestamp="2026-03-18 14:30:38 +0000 UTC" firstStartedPulling="2026-03-18 14:30:41.8165535 +0000 UTC m=+1825.945681957" lastFinishedPulling="2026-03-18 14:30:50.438980171 +0000 UTC m=+1834.568108628" observedRunningTime="2026-03-18 14:30:51.999297176 +0000 UTC m=+1836.128425633" watchObservedRunningTime="2026-03-18 14:30:52.036853251 +0000 UTC m=+1836.165981708" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.074350 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=6.089983585 podStartE2EDuration="15.074322624s" podCreationTimestamp="2026-03-18 14:30:37 +0000 UTC" firstStartedPulling="2026-03-18 14:30:41.422134214 +0000 UTC m=+1825.551262671" lastFinishedPulling="2026-03-18 14:30:50.406473253 +0000 UTC m=+1834.535601710" observedRunningTime="2026-03-18 14:30:52.025138417 +0000 UTC m=+1836.154266884" watchObservedRunningTime="2026-03-18 14:30:52.074322624 +0000 UTC m=+1836.203451081" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.114215 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.114253 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0626adfc-bb2b-4796-bd56-551264758fd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.180511 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 
14:30:52.313306 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:30:52 crc kubenswrapper[4857]: I0318 14:30:52.328017 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-62glg"] Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.022167 4857 generic.go:334] "Generic (PLEG): container finished" podID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerID="702d7bce8e59324645dd4740ced3e9950afaade8f68e44c2ea5847a3ca65bcc3" exitCode=143 Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.022242 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerDied","Data":"702d7bce8e59324645dd4740ced3e9950afaade8f68e44c2ea5847a3ca65bcc3"} Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.024937 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerStarted","Data":"0d4e079ecbde55db9302ed1a05cc3f9be891140a6926c3c6fa1c5b673519c03e"} Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.026765 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerStarted","Data":"d85493ae0093621ece155b3acfda7c0e4e70781954ef86bb9231b878a01ee7c1"} Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.052940 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.089499319 podStartE2EDuration="16.052911261s" podCreationTimestamp="2026-03-18 14:30:37 +0000 UTC" firstStartedPulling="2026-03-18 14:30:41.456060928 +0000 UTC m=+1825.585189385" lastFinishedPulling="2026-03-18 14:30:50.41947286 +0000 UTC m=+1834.548601327" observedRunningTime="2026-03-18 14:30:53.044659093 +0000 UTC m=+1837.173787550" watchObservedRunningTime="2026-03-18 
14:30:53.052911261 +0000 UTC m=+1837.182039718" Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.117429 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.117817 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-central-agent" containerID="cri-o://1de61d1d1510ea30bfd9a0d8584be87e1bdb5fbcdd0f3b85c8a2c2c73a6542a8" gracePeriod=30 Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.117884 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="sg-core" containerID="cri-o://d8c431dea970535feb0393cdff88aee5068915fb73c41137ce7dac0fc68e3554" gracePeriod=30 Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.117917 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="proxy-httpd" containerID="cri-o://7a8bcdcf54262706908cba206ed52a032a504d1886a28f67bc1bc5fb9b17aba5" gracePeriod=30 Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.117962 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-notification-agent" containerID="cri-o://c9246e81233a550573bc4ba1256d7c08bb110f0c6ee7e0823a74fb4e43ad623f" gracePeriod=30 Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.182394 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" path="/var/lib/kubelet/pods/0626adfc-bb2b-4796-bd56-551264758fd6/volumes" Mar 18 14:30:53 crc kubenswrapper[4857]: I0318 14:30:53.279578 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054025 4857 generic.go:334] "Generic (PLEG): container finished" podID="83951418-11ce-418d-b66f-7c2829e16568" containerID="7a8bcdcf54262706908cba206ed52a032a504d1886a28f67bc1bc5fb9b17aba5" exitCode=0 Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054401 4857 generic.go:334] "Generic (PLEG): container finished" podID="83951418-11ce-418d-b66f-7c2829e16568" containerID="d8c431dea970535feb0393cdff88aee5068915fb73c41137ce7dac0fc68e3554" exitCode=2 Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054415 4857 generic.go:334] "Generic (PLEG): container finished" podID="83951418-11ce-418d-b66f-7c2829e16568" containerID="1de61d1d1510ea30bfd9a0d8584be87e1bdb5fbcdd0f3b85c8a2c2c73a6542a8" exitCode=0 Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054114 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerDied","Data":"7a8bcdcf54262706908cba206ed52a032a504d1886a28f67bc1bc5fb9b17aba5"} Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054632 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerDied","Data":"d8c431dea970535feb0393cdff88aee5068915fb73c41137ce7dac0fc68e3554"} Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.054650 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerDied","Data":"1de61d1d1510ea30bfd9a0d8584be87e1bdb5fbcdd0f3b85c8a2c2c73a6542a8"} Mar 18 14:30:54 crc kubenswrapper[4857]: I0318 14:30:54.765918 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.078199 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerStarted","Data":"b3c68814516d501e9473d8643b8af560c6808e109b3d09aaf1b86447cb4eaf51"} Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.105601 4857 generic.go:334] "Generic (PLEG): container finished" podID="83951418-11ce-418d-b66f-7c2829e16568" containerID="c9246e81233a550573bc4ba1256d7c08bb110f0c6ee7e0823a74fb4e43ad623f" exitCode=0 Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.105660 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerDied","Data":"c9246e81233a550573bc4ba1256d7c08bb110f0c6ee7e0823a74fb4e43ad623f"} Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.105703 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83951418-11ce-418d-b66f-7c2829e16568","Type":"ContainerDied","Data":"0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a"} Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.105719 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae44427b3741bef239050bea929f00963cc8265f8bb5db35ea877ac0c7ccb9a" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.230845 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320034 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320236 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320296 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320346 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320400 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffbwc\" (UniqueName: \"kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320435 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.320538 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd\") pod \"83951418-11ce-418d-b66f-7c2829e16568\" (UID: \"83951418-11ce-418d-b66f-7c2829e16568\") " Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.321586 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.321609 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.322485 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.322518 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83951418-11ce-418d-b66f-7c2829e16568-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.329960 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts" (OuterVolumeSpecName: "scripts") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.345070 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc" (OuterVolumeSpecName: "kube-api-access-ffbwc") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "kube-api-access-ffbwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.425833 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.426198 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffbwc\" (UniqueName: \"kubernetes.io/projected/83951418-11ce-418d-b66f-7c2829e16568-kube-api-access-ffbwc\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.487361 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.503265 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.529281 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.529318 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.581915 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data" (OuterVolumeSpecName: "config-data") pod "83951418-11ce-418d-b66f-7c2829e16568" (UID: "83951418-11ce-418d-b66f-7c2829e16568"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:55 crc kubenswrapper[4857]: I0318 14:30:55.631927 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83951418-11ce-418d-b66f-7c2829e16568-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.116252 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.175942 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.239324 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.271531 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272254 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="init" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272278 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="init" Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272296 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="proxy-httpd" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272303 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="proxy-httpd" Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272323 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="dnsmasq-dns" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272329 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="dnsmasq-dns" Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272339 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="sg-core" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272346 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="sg-core" Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272375 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-central-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272382 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-central-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: E0318 14:30:56.272413 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-notification-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272420 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-notification-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272663 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-notification-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272689 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="proxy-httpd" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272713 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="sg-core" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272727 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="0626adfc-bb2b-4796-bd56-551264758fd6" containerName="dnsmasq-dns" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.272741 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83951418-11ce-418d-b66f-7c2829e16568" containerName="ceilometer-central-agent" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.275405 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.279442 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.279806 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.288437 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352618 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352686 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352847 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2fsc\" (UniqueName: \"kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352922 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352948 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.352972 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.353000 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455670 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455727 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455857 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r2fsc\" (UniqueName: \"kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455920 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455943 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455962 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.455985 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.456703 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " 
pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.456852 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.460435 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.460625 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.461917 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.478805 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.480027 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2fsc\" (UniqueName: 
\"kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc\") pod \"ceilometer-0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.601141 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.755675 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.806260 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.806511 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" containerName="kube-state-metrics" containerID="cri-o://8829a66b8f82391a5de78501b48d419be4736a59d5607024bbb3678f5ab6ae0b" gracePeriod=30 Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.920898 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:30:56 crc kubenswrapper[4857]: I0318 14:30:56.920964 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.013847 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.014106 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="fc05a021-e410-4413-8e09-99db47cc4ee5" containerName="mysqld-exporter" containerID="cri-o://a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3" gracePeriod=30 Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.174293 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="0b95917b-c40b-4bb7-8064-4d297f45711d" containerID="1b54f4453b663088b33f58cb6d42727ccb556d4b30e9ae03b4455ae278cd5a0a" exitCode=0 Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.229566 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:30:57 crc kubenswrapper[4857]: E0318 14:30:57.230068 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.235273 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83951418-11ce-418d-b66f-7c2829e16568" path="/var/lib/kubelet/pods/83951418-11ce-418d-b66f-7c2829e16568/volumes" Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.236330 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pxxc5" event={"ID":"0b95917b-c40b-4bb7-8064-4d297f45711d","Type":"ContainerDied","Data":"1b54f4453b663088b33f58cb6d42727ccb556d4b30e9ae03b4455ae278cd5a0a"} Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.358901 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.792970 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.948731 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle\") pod \"fc05a021-e410-4413-8e09-99db47cc4ee5\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.949528 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x78f\" (UniqueName: \"kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f\") pod \"fc05a021-e410-4413-8e09-99db47cc4ee5\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.949959 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data\") pod \"fc05a021-e410-4413-8e09-99db47cc4ee5\" (UID: \"fc05a021-e410-4413-8e09-99db47cc4ee5\") " Mar 18 14:30:57 crc kubenswrapper[4857]: I0318 14:30:57.959796 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f" (OuterVolumeSpecName: "kube-api-access-9x78f") pod "fc05a021-e410-4413-8e09-99db47cc4ee5" (UID: "fc05a021-e410-4413-8e09-99db47cc4ee5"). InnerVolumeSpecName "kube-api-access-9x78f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.048447 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc05a021-e410-4413-8e09-99db47cc4ee5" (UID: "fc05a021-e410-4413-8e09-99db47cc4ee5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.054604 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.055897 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x78f\" (UniqueName: \"kubernetes.io/projected/fc05a021-e410-4413-8e09-99db47cc4ee5-kube-api-access-9x78f\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.057544 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data" (OuterVolumeSpecName: "config-data") pod "fc05a021-e410-4413-8e09-99db47cc4ee5" (UID: "fc05a021-e410-4413-8e09-99db47cc4ee5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.160300 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc05a021-e410-4413-8e09-99db47cc4ee5-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.194312 4857 generic.go:334] "Generic (PLEG): container finished" podID="fc05a021-e410-4413-8e09-99db47cc4ee5" containerID="a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3" exitCode=2 Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.194402 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fc05a021-e410-4413-8e09-99db47cc4ee5","Type":"ContainerDied","Data":"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3"} Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.194725 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-0" event={"ID":"fc05a021-e410-4413-8e09-99db47cc4ee5","Type":"ContainerDied","Data":"2f7c4034095106fbb2f8043c1ccfce87666bbe26113a2b18efca506fc50ea73e"} Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.194840 4857 scope.go:117] "RemoveContainer" containerID="a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.194576 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.199336 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerStarted","Data":"43d97d51cbbe91e25828c6440d45e720def5f84feddc4dceaa2c979b203e9992"} Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.226946 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.228452 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.244943 4857 generic.go:334] "Generic (PLEG): container finished" podID="fdab76a8-c643-44eb-8fe5-7fd0ab42f634" containerID="49bbc9ac34c00e20ba6e4558560eedf0334afbf8faf6fb5efbe5ac367c8d9ac8" exitCode=0 Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.245023 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xv86z" event={"ID":"fdab76a8-c643-44eb-8fe5-7fd0ab42f634","Type":"ContainerDied","Data":"49bbc9ac34c00e20ba6e4558560eedf0334afbf8faf6fb5efbe5ac367c8d9ac8"} Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.274037 4857 generic.go:334] "Generic (PLEG): container finished" podID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" containerID="8829a66b8f82391a5de78501b48d419be4736a59d5607024bbb3678f5ab6ae0b" exitCode=2 
Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.274372 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8b53cfe-8acc-431c-be7e-b6d48ce587a8","Type":"ContainerDied","Data":"8829a66b8f82391a5de78501b48d419be4736a59d5607024bbb3678f5ab6ae0b"} Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.280516 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.312180 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.329565 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.347415 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:58 crc kubenswrapper[4857]: E0318 14:30:58.348110 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc05a021-e410-4413-8e09-99db47cc4ee5" containerName="mysqld-exporter" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.348141 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc05a021-e410-4413-8e09-99db47cc4ee5" containerName="mysqld-exporter" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.348469 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc05a021-e410-4413-8e09-99db47cc4ee5" containerName="mysqld-exporter" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.349450 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.353452 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.353673 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.359906 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.398150 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.483946 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-config-data\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.484050 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.484151 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmbqd\" (UniqueName: \"kubernetes.io/projected/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-kube-api-access-nmbqd\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.484301 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.587112 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmbqd\" (UniqueName: \"kubernetes.io/projected/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-kube-api-access-nmbqd\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.587262 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.587375 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-config-data\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.587446 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.595827 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-config-data\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.614603 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.615379 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.626963 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmbqd\" (UniqueName: \"kubernetes.io/projected/f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c-kube-api-access-nmbqd\") pod \"mysqld-exporter-0\" (UID: \"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c\") " pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.678034 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.856893 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.869345 4857 scope.go:117] "RemoveContainer" containerID="a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3" Mar 18 14:30:58 crc kubenswrapper[4857]: E0318 14:30:58.870084 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3\": container with ID starting with a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3 not found: ID does not exist" containerID="a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.870128 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3"} err="failed to get container status \"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3\": rpc error: code = NotFound desc = could not find container \"a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3\": container with ID starting with a22ad1f0a4bc5943eb95b940a116ce7119e35a2e118f9a26bfdfeab7f38dc3d3 not found: ID does not exist" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.919383 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.919423 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.936219 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pxxc5" Mar 18 14:30:58 crc kubenswrapper[4857]: I0318 14:30:58.999780 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk7bq\" (UniqueName: \"kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq\") pod \"e8b53cfe-8acc-431c-be7e-b6d48ce587a8\" (UID: \"e8b53cfe-8acc-431c-be7e-b6d48ce587a8\") " Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.008997 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq" (OuterVolumeSpecName: "kube-api-access-vk7bq") pod "e8b53cfe-8acc-431c-be7e-b6d48ce587a8" (UID: "e8b53cfe-8acc-431c-be7e-b6d48ce587a8"). InnerVolumeSpecName "kube-api-access-vk7bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.102536 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts\") pod \"0b95917b-c40b-4bb7-8064-4d297f45711d\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.102657 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data\") pod \"0b95917b-c40b-4bb7-8064-4d297f45711d\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.102694 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle\") pod \"0b95917b-c40b-4bb7-8064-4d297f45711d\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 
14:30:59.102735 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzqq5\" (UniqueName: \"kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5\") pod \"0b95917b-c40b-4bb7-8064-4d297f45711d\" (UID: \"0b95917b-c40b-4bb7-8064-4d297f45711d\") " Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.104217 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk7bq\" (UniqueName: \"kubernetes.io/projected/e8b53cfe-8acc-431c-be7e-b6d48ce587a8-kube-api-access-vk7bq\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.110461 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5" (OuterVolumeSpecName: "kube-api-access-jzqq5") pod "0b95917b-c40b-4bb7-8064-4d297f45711d" (UID: "0b95917b-c40b-4bb7-8064-4d297f45711d"). InnerVolumeSpecName "kube-api-access-jzqq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.111107 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts" (OuterVolumeSpecName: "scripts") pod "0b95917b-c40b-4bb7-8064-4d297f45711d" (UID: "0b95917b-c40b-4bb7-8064-4d297f45711d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.151454 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data" (OuterVolumeSpecName: "config-data") pod "0b95917b-c40b-4bb7-8064-4d297f45711d" (UID: "0b95917b-c40b-4bb7-8064-4d297f45711d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.155953 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b95917b-c40b-4bb7-8064-4d297f45711d" (UID: "0b95917b-c40b-4bb7-8064-4d297f45711d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.192019 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc05a021-e410-4413-8e09-99db47cc4ee5" path="/var/lib/kubelet/pods/fc05a021-e410-4413-8e09-99db47cc4ee5/volumes" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.207204 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.207237 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.207252 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95917b-c40b-4bb7-8064-4d297f45711d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.207345 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzqq5\" (UniqueName: \"kubernetes.io/projected/0b95917b-c40b-4bb7-8064-4d297f45711d-kube-api-access-jzqq5\") on node \"crc\" DevicePath \"\"" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.346176 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"e8b53cfe-8acc-431c-be7e-b6d48ce587a8","Type":"ContainerDied","Data":"2beb6e9b98a2b0f0644358865d91fb192d2b88e77c2e2c03ed2a5c620559396d"} Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.346271 4857 scope.go:117] "RemoveContainer" containerID="8829a66b8f82391a5de78501b48d419be4736a59d5607024bbb3678f5ab6ae0b" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.346453 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.370202 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pxxc5" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.370375 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pxxc5" event={"ID":"0b95917b-c40b-4bb7-8064-4d297f45711d","Type":"ContainerDied","Data":"01dc2bb0a3b2c2b1aacba8c3205814d71c9f8c634f93b7085c3e4cfe29ae55aa"} Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.370443 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01dc2bb0a3b2c2b1aacba8c3205814d71c9f8c634f93b7085c3e4cfe29ae55aa" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.427841 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.504738 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.535966 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: E0318 14:30:59.536651 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b95917b-c40b-4bb7-8064-4d297f45711d" containerName="nova-manage" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.536667 4857 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0b95917b-c40b-4bb7-8064-4d297f45711d" containerName="nova-manage" Mar 18 14:30:59 crc kubenswrapper[4857]: E0318 14:30:59.536712 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" containerName="kube-state-metrics" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.536720 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" containerName="kube-state-metrics" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.537035 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b95917b-c40b-4bb7-8064-4d297f45711d" containerName="nova-manage" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.537058 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" containerName="kube-state-metrics" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.538041 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.538420 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.542234 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.542511 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.554841 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.578790 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.608045 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.608334 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-log" containerID="cri-o://baac5d7a84a41f29aca8b8d37c382e8a428f2044e60b2ff3ddd35e1c422a4dda" gracePeriod=30 Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.608494 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-api" containerID="cri-o://d85493ae0093621ece155b3acfda7c0e4e70781954ef86bb9231b878a01ee7c1" gracePeriod=30 Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.626658 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-combined-ca-bundle\") 
pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.626736 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.626901 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.626981 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvct5\" (UniqueName: \"kubernetes.io/projected/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-api-access-hvct5\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.635883 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.688019 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.255:8774/\": EOF" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.702005 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.255:8774/\": EOF" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.730471 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.731428 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.731620 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.731713 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvct5\" (UniqueName: \"kubernetes.io/projected/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-api-access-hvct5\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.738588 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.738733 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.742247 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.761705 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvct5\" (UniqueName: \"kubernetes.io/projected/d86ecda9-1d3b-4efe-9778-30f3f6803c11-kube-api-access-hvct5\") pod \"kube-state-metrics-0\" (UID: \"d86ecda9-1d3b-4efe-9778-30f3f6803c11\") " pod="openstack/kube-state-metrics-0" Mar 18 14:30:59 crc kubenswrapper[4857]: I0318 14:30:59.886563 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.332902 4857 scope.go:117] "RemoveContainer" containerID="9a78288c3549cdc08f8a178272dbbee32a95b8143037465ec0e2ea7ba5a20084" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.429194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerStarted","Data":"ffbbf2e8a34464b5593ae06ae82ba11d166823d7c5ab6050be2bc556cfa4d8d4"} Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.446555 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c","Type":"ContainerStarted","Data":"1b0b8dbab40c075eb74e9c0edb70c432d494e673344d9af980fc1f43838e220d"} Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.452051 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xv86z" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.470931 4857 generic.go:334] "Generic (PLEG): container finished" podID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerID="baac5d7a84a41f29aca8b8d37c382e8a428f2044e60b2ff3ddd35e1c422a4dda" exitCode=143 Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.470994 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerDied","Data":"baac5d7a84a41f29aca8b8d37c382e8a428f2044e60b2ff3ddd35e1c422a4dda"} Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.487076 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerStarted","Data":"1d3cd39b4af15c44ce562dee9ae9a05788ea369e29ad057032174ba1ddf5cbea"} Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.502979 4857 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xv86z" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.503992 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xv86z" event={"ID":"fdab76a8-c643-44eb-8fe5-7fd0ab42f634","Type":"ContainerDied","Data":"3f1f38b13501dc913a7deab83524ae568d53c4e1a39001c748dfc43d60561b0e"} Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.505545 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1f38b13501dc913a7deab83524ae568d53c4e1a39001c748dfc43d60561b0e" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.600097 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle\") pod \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.600239 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts\") pod \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.600378 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data\") pod \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\" (UID: \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.600651 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slcsb\" (UniqueName: \"kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb\") pod \"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\" (UID: 
\"fdab76a8-c643-44eb-8fe5-7fd0ab42f634\") " Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.609245 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb" (OuterVolumeSpecName: "kube-api-access-slcsb") pod "fdab76a8-c643-44eb-8fe5-7fd0ab42f634" (UID: "fdab76a8-c643-44eb-8fe5-7fd0ab42f634"). InnerVolumeSpecName "kube-api-access-slcsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.617892 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts" (OuterVolumeSpecName: "scripts") pod "fdab76a8-c643-44eb-8fe5-7fd0ab42f634" (UID: "fdab76a8-c643-44eb-8fe5-7fd0ab42f634"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.618048 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.646812 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdab76a8-c643-44eb-8fe5-7fd0ab42f634" (UID: "fdab76a8-c643-44eb-8fe5-7fd0ab42f634"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.649969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data" (OuterVolumeSpecName: "config-data") pod "fdab76a8-c643-44eb-8fe5-7fd0ab42f634" (UID: "fdab76a8-c643-44eb-8fe5-7fd0ab42f634"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.704684 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slcsb\" (UniqueName: \"kubernetes.io/projected/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-kube-api-access-slcsb\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.704773 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.704789 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:00 crc kubenswrapper[4857]: I0318 14:31:00.704800 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdab76a8-c643-44eb-8fe5-7fd0ab42f634-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.226829 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b53cfe-8acc-431c-be7e-b6d48ce587a8" path="/var/lib/kubelet/pods/e8b53cfe-8acc-431c-be7e-b6d48ce587a8/volumes" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.521068 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d86ecda9-1d3b-4efe-9778-30f3f6803c11","Type":"ContainerStarted","Data":"9c8a75e1dd7e0911bbb520873140b7dd597edc62e43d95d63198c6251cb0d5a0"} Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.529785 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerName="nova-scheduler-scheduler" 
containerID="cri-o://b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" gracePeriod=30 Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.530251 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c","Type":"ContainerStarted","Data":"731b93e0c8c508c3d0ba49d5bebe719b3387a9a0978bb32d5b46ce1c93112c26"} Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.573098 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.469804373 podStartE2EDuration="3.573075207s" podCreationTimestamp="2026-03-18 14:30:58 +0000 UTC" firstStartedPulling="2026-03-18 14:30:59.70109502 +0000 UTC m=+1843.830223477" lastFinishedPulling="2026-03-18 14:31:00.804365854 +0000 UTC m=+1844.933494311" observedRunningTime="2026-03-18 14:31:01.557434714 +0000 UTC m=+1845.686563171" watchObservedRunningTime="2026-03-18 14:31:01.573075207 +0000 UTC m=+1845.702203664" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.595895 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 14:31:01 crc kubenswrapper[4857]: E0318 14:31:01.596511 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab76a8-c643-44eb-8fe5-7fd0ab42f634" containerName="nova-cell1-conductor-db-sync" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.596583 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab76a8-c643-44eb-8fe5-7fd0ab42f634" containerName="nova-cell1-conductor-db-sync" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.596859 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdab76a8-c643-44eb-8fe5-7fd0ab42f634" containerName="nova-cell1-conductor-db-sync" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.597777 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.612948 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.651829 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.758030 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfl4n\" (UniqueName: \"kubernetes.io/projected/50e494dd-1112-4b7e-b816-50a04847f133-kube-api-access-jfl4n\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.758085 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.758255 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.860516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfl4n\" (UniqueName: \"kubernetes.io/projected/50e494dd-1112-4b7e-b816-50a04847f133-kube-api-access-jfl4n\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 
14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.860562 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.860772 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.867810 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.868272 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e494dd-1112-4b7e-b816-50a04847f133-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.883725 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfl4n\" (UniqueName: \"kubernetes.io/projected/50e494dd-1112-4b7e-b816-50a04847f133-kube-api-access-jfl4n\") pod \"nova-cell1-conductor-0\" (UID: \"50e494dd-1112-4b7e-b816-50a04847f133\") " pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:01 crc kubenswrapper[4857]: I0318 14:31:01.940072 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:02 crc kubenswrapper[4857]: I0318 14:31:02.624150 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d86ecda9-1d3b-4efe-9778-30f3f6803c11","Type":"ContainerStarted","Data":"a58185310d0c867c23e717d129f357fdcbfb43d907bb858d888b8f82b212188c"} Mar 18 14:31:02 crc kubenswrapper[4857]: I0318 14:31:02.626135 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 18 14:31:02 crc kubenswrapper[4857]: I0318 14:31:02.650452 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerStarted","Data":"6a6130f48a31b00a20dfd0aeac7ca132b7b86ae0bfeec09eb628ebe5067bcf79"} Mar 18 14:31:02 crc kubenswrapper[4857]: I0318 14:31:02.673302 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.260306683 podStartE2EDuration="3.673277554s" podCreationTimestamp="2026-03-18 14:30:59 +0000 UTC" firstStartedPulling="2026-03-18 14:31:00.684054936 +0000 UTC m=+1844.813183383" lastFinishedPulling="2026-03-18 14:31:01.097025797 +0000 UTC m=+1845.226154254" observedRunningTime="2026-03-18 14:31:02.671112649 +0000 UTC m=+1846.800241106" watchObservedRunningTime="2026-03-18 14:31:02.673277554 +0000 UTC m=+1846.802406011" Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.158453 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 14:31:03 crc kubenswrapper[4857]: W0318 14:31:03.170314 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50e494dd_1112_4b7e_b816_50a04847f133.slice/crio-9d10ec02aed4a01eedde04d592cdcffedd7b3cf426e1d28fc8a7ba7003679afc WatchSource:0}: Error finding container 
9d10ec02aed4a01eedde04d592cdcffedd7b3cf426e1d28fc8a7ba7003679afc: Status 404 returned error can't find the container with id 9d10ec02aed4a01eedde04d592cdcffedd7b3cf426e1d28fc8a7ba7003679afc Mar 18 14:31:03 crc kubenswrapper[4857]: E0318 14:31:03.288014 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:03 crc kubenswrapper[4857]: E0318 14:31:03.294264 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:03 crc kubenswrapper[4857]: E0318 14:31:03.303344 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:03 crc kubenswrapper[4857]: E0318 14:31:03.303446 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerName="nova-scheduler-scheduler" Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.671762 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"50e494dd-1112-4b7e-b816-50a04847f133","Type":"ContainerStarted","Data":"4adb05481828fe57cf63ea1fbc9aabf4775440ae0670b7cfadcb014bf859131a"} Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.671818 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"50e494dd-1112-4b7e-b816-50a04847f133","Type":"ContainerStarted","Data":"9d10ec02aed4a01eedde04d592cdcffedd7b3cf426e1d28fc8a7ba7003679afc"} Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.671894 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.679410 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerStarted","Data":"7612da7d2714414ac3998eb393098cd03bf0f7bcedfebef6761665c8154f3f8f"} Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.679594 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-api" containerID="cri-o://0d4e079ecbde55db9302ed1a05cc3f9be891140a6926c3c6fa1c5b673519c03e" gracePeriod=30 Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.679900 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-listener" containerID="cri-o://7612da7d2714414ac3998eb393098cd03bf0f7bcedfebef6761665c8154f3f8f" gracePeriod=30 Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.679987 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-notifier" containerID="cri-o://ffbbf2e8a34464b5593ae06ae82ba11d166823d7c5ab6050be2bc556cfa4d8d4" gracePeriod=30 Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.680088 
4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-evaluator" containerID="cri-o://b3c68814516d501e9473d8643b8af560c6808e109b3d09aaf1b86447cb4eaf51" gracePeriod=30 Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.701394 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerStarted","Data":"7d7617c352986d1ce168e65f6318113eb7f7c94cc20a27e27a408231e26afb3e"} Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.709350 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.709324926 podStartE2EDuration="2.709324926s" podCreationTimestamp="2026-03-18 14:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:03.697242232 +0000 UTC m=+1847.826370689" watchObservedRunningTime="2026-03-18 14:31:03.709324926 +0000 UTC m=+1847.838453383" Mar 18 14:31:03 crc kubenswrapper[4857]: I0318 14:31:03.733005 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=6.805473223 podStartE2EDuration="17.732980201s" podCreationTimestamp="2026-03-18 14:30:46 +0000 UTC" firstStartedPulling="2026-03-18 14:30:51.459904772 +0000 UTC m=+1835.589033229" lastFinishedPulling="2026-03-18 14:31:02.38741175 +0000 UTC m=+1846.516540207" observedRunningTime="2026-03-18 14:31:03.727205586 +0000 UTC m=+1847.856334053" watchObservedRunningTime="2026-03-18 14:31:03.732980201 +0000 UTC m=+1847.862108648" Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.729559 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerID="ffbbf2e8a34464b5593ae06ae82ba11d166823d7c5ab6050be2bc556cfa4d8d4" exitCode=0 
Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.730231 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerID="b3c68814516d501e9473d8643b8af560c6808e109b3d09aaf1b86447cb4eaf51" exitCode=0 Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.730244 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerID="0d4e079ecbde55db9302ed1a05cc3f9be891140a6926c3c6fa1c5b673519c03e" exitCode=0 Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.730895 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerDied","Data":"ffbbf2e8a34464b5593ae06ae82ba11d166823d7c5ab6050be2bc556cfa4d8d4"} Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.730959 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerDied","Data":"b3c68814516d501e9473d8643b8af560c6808e109b3d09aaf1b86447cb4eaf51"} Mar 18 14:31:04 crc kubenswrapper[4857]: I0318 14:31:04.730976 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerDied","Data":"0d4e079ecbde55db9302ed1a05cc3f9be891140a6926c3c6fa1c5b673519c03e"} Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.747421 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerStarted","Data":"9aba6860f137599b4fa4e79e78883f6550edac68f38e88de2b4a991c900a2d69"} Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.747559 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-central-agent" 
containerID="cri-o://1d3cd39b4af15c44ce562dee9ae9a05788ea369e29ad057032174ba1ddf5cbea" gracePeriod=30 Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.747605 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="proxy-httpd" containerID="cri-o://9aba6860f137599b4fa4e79e78883f6550edac68f38e88de2b4a991c900a2d69" gracePeriod=30 Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.747677 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-notification-agent" containerID="cri-o://6a6130f48a31b00a20dfd0aeac7ca132b7b86ae0bfeec09eb628ebe5067bcf79" gracePeriod=30 Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.747709 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="sg-core" containerID="cri-o://7d7617c352986d1ce168e65f6318113eb7f7c94cc20a27e27a408231e26afb3e" gracePeriod=30 Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.748326 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:31:05 crc kubenswrapper[4857]: I0318 14:31:05.794480 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.038958533 podStartE2EDuration="9.794452407s" podCreationTimestamp="2026-03-18 14:30:56 +0000 UTC" firstStartedPulling="2026-03-18 14:30:57.390184177 +0000 UTC m=+1841.519312634" lastFinishedPulling="2026-03-18 14:31:05.145678051 +0000 UTC m=+1849.274806508" observedRunningTime="2026-03-18 14:31:05.79139949 +0000 UTC m=+1849.920527947" watchObservedRunningTime="2026-03-18 14:31:05.794452407 +0000 UTC m=+1849.923580864" Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.775578 4857 generic.go:334] 
"Generic (PLEG): container finished" podID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerID="b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" exitCode=0 Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.775635 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfe4db16-d4dc-4222-87d6-71dc331417d5","Type":"ContainerDied","Data":"b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5"} Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.775905 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfe4db16-d4dc-4222-87d6-71dc331417d5","Type":"ContainerDied","Data":"21fe3b714472b065251859a2e29e2e86d98bc7acb4cd8f4802a15ab9ac9d508d"} Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.775923 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21fe3b714472b065251859a2e29e2e86d98bc7acb4cd8f4802a15ab9ac9d508d" Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.783546 4857 generic.go:334] "Generic (PLEG): container finished" podID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerID="7d7617c352986d1ce168e65f6318113eb7f7c94cc20a27e27a408231e26afb3e" exitCode=2 Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.783589 4857 generic.go:334] "Generic (PLEG): container finished" podID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerID="6a6130f48a31b00a20dfd0aeac7ca132b7b86ae0bfeec09eb628ebe5067bcf79" exitCode=0 Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.783628 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerDied","Data":"7d7617c352986d1ce168e65f6318113eb7f7c94cc20a27e27a408231e26afb3e"} Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.783661 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerDied","Data":"6a6130f48a31b00a20dfd0aeac7ca132b7b86ae0bfeec09eb628ebe5067bcf79"} Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.818604 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.993253 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data\") pod \"bfe4db16-d4dc-4222-87d6-71dc331417d5\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.993553 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle\") pod \"bfe4db16-d4dc-4222-87d6-71dc331417d5\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " Mar 18 14:31:06 crc kubenswrapper[4857]: I0318 14:31:06.993863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvg4s\" (UniqueName: \"kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s\") pod \"bfe4db16-d4dc-4222-87d6-71dc331417d5\" (UID: \"bfe4db16-d4dc-4222-87d6-71dc331417d5\") " Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.004613 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s" (OuterVolumeSpecName: "kube-api-access-rvg4s") pod "bfe4db16-d4dc-4222-87d6-71dc331417d5" (UID: "bfe4db16-d4dc-4222-87d6-71dc331417d5"). InnerVolumeSpecName "kube-api-access-rvg4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.030882 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfe4db16-d4dc-4222-87d6-71dc331417d5" (UID: "bfe4db16-d4dc-4222-87d6-71dc331417d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.062352 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data" (OuterVolumeSpecName: "config-data") pod "bfe4db16-d4dc-4222-87d6-71dc331417d5" (UID: "bfe4db16-d4dc-4222-87d6-71dc331417d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.097405 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.097442 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvg4s\" (UniqueName: \"kubernetes.io/projected/bfe4db16-d4dc-4222-87d6-71dc331417d5-kube-api-access-rvg4s\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.097456 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfe4db16-d4dc-4222-87d6-71dc331417d5-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.799150 4857 generic.go:334] "Generic (PLEG): container finished" podID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerID="d85493ae0093621ece155b3acfda7c0e4e70781954ef86bb9231b878a01ee7c1" 
exitCode=0 Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.799567 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:31:07 crc kubenswrapper[4857]: I0318 14:31:07.799562 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerDied","Data":"d85493ae0093621ece155b3acfda7c0e4e70781954ef86bb9231b878a01ee7c1"} Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.109272 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.132465 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.154943 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:08 crc kubenswrapper[4857]: E0318 14:31:08.156054 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerName="nova-scheduler-scheduler" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.156082 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerName="nova-scheduler-scheduler" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.156581 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" containerName="nova-scheduler-scheduler" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.158137 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.164978 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.198904 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.267349 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.267495 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.267764 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7c9v\" (UniqueName: \"kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.369614 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.369743 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.369916 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7c9v\" (UniqueName: \"kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.375511 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.376055 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.380628 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.390679 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7c9v\" (UniqueName: \"kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v\") pod \"nova-scheduler-0\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.867216 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.869500 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data\") pod \"f77372b5-5bbb-4110-9366-b13feb8eb77d\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.869598 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle\") pod \"f77372b5-5bbb-4110-9366-b13feb8eb77d\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.869675 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x66hx\" (UniqueName: \"kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx\") pod \"f77372b5-5bbb-4110-9366-b13feb8eb77d\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") " Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.904813 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs\") pod \"f77372b5-5bbb-4110-9366-b13feb8eb77d\" (UID: \"f77372b5-5bbb-4110-9366-b13feb8eb77d\") 
" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.905745 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs" (OuterVolumeSpecName: "logs") pod "f77372b5-5bbb-4110-9366-b13feb8eb77d" (UID: "f77372b5-5bbb-4110-9366-b13feb8eb77d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.906468 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f77372b5-5bbb-4110-9366-b13feb8eb77d-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.907958 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx" (OuterVolumeSpecName: "kube-api-access-x66hx") pod "f77372b5-5bbb-4110-9366-b13feb8eb77d" (UID: "f77372b5-5bbb-4110-9366-b13feb8eb77d"). InnerVolumeSpecName "kube-api-access-x66hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.942969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f77372b5-5bbb-4110-9366-b13feb8eb77d" (UID: "f77372b5-5bbb-4110-9366-b13feb8eb77d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.944023 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f77372b5-5bbb-4110-9366-b13feb8eb77d","Type":"ContainerDied","Data":"0155e9a54aa7211c5b99bcda827941507cd2fd7f13fcfdcd96092be101ec8f39"} Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.944118 4857 scope.go:117] "RemoveContainer" containerID="d85493ae0093621ece155b3acfda7c0e4e70781954ef86bb9231b878a01ee7c1" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.944292 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:08 crc kubenswrapper[4857]: I0318 14:31:08.977154 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data" (OuterVolumeSpecName: "config-data") pod "f77372b5-5bbb-4110-9366-b13feb8eb77d" (UID: "f77372b5-5bbb-4110-9366-b13feb8eb77d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.018213 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.018249 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77372b5-5bbb-4110-9366-b13feb8eb77d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.018264 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66hx\" (UniqueName: \"kubernetes.io/projected/f77372b5-5bbb-4110-9366-b13feb8eb77d-kube-api-access-x66hx\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.179830 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe4db16-d4dc-4222-87d6-71dc331417d5" path="/var/lib/kubelet/pods/bfe4db16-d4dc-4222-87d6-71dc331417d5/volumes" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.242174 4857 scope.go:117] "RemoveContainer" containerID="baac5d7a84a41f29aca8b8d37c382e8a428f2044e60b2ff3ddd35e1c422a4dda" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.430877 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.472044 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.505910 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:09 crc kubenswrapper[4857]: E0318 14:31:09.506875 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-log" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.506902 
4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-log" Mar 18 14:31:09 crc kubenswrapper[4857]: E0318 14:31:09.506937 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-api" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.506946 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-api" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.507345 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-log" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.507381 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" containerName="nova-api-api" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.509222 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.519814 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.529322 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.657159 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.658558 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k79dp\" (UniqueName: \"kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.658686 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.658803 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.658836 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 
14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.761635 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.761686 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.761876 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k79dp\" (UniqueName: \"kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.761960 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.762491 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.768678 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data\") pod \"nova-api-0\" (UID: 
\"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.769198 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.781480 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k79dp\" (UniqueName: \"kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp\") pod \"nova-api-0\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.865827 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.906883 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.988555 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a91e5d19-143f-43ca-8f9c-1a6ff39226bd","Type":"ContainerStarted","Data":"2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1"} Mar 18 14:31:09 crc kubenswrapper[4857]: I0318 14:31:09.988600 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a91e5d19-143f-43ca-8f9c-1a6ff39226bd","Type":"ContainerStarted","Data":"df7fc8675dee6b47d28437a6731f8f3874572e2af649d089c14d4aa9dfbad113"} Mar 18 14:31:10 crc kubenswrapper[4857]: I0318 14:31:10.029916 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.029892211 podStartE2EDuration="2.029892211s" podCreationTimestamp="2026-03-18 
14:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:10.024233348 +0000 UTC m=+1854.153361805" watchObservedRunningTime="2026-03-18 14:31:10.029892211 +0000 UTC m=+1854.159020668" Mar 18 14:31:10 crc kubenswrapper[4857]: I0318 14:31:10.549106 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:11 crc kubenswrapper[4857]: I0318 14:31:11.007414 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerStarted","Data":"bba88419f71654d0b2e3a3ae1185e9d1075e47096aa26c8774ad7975ad4234f0"} Mar 18 14:31:11 crc kubenswrapper[4857]: I0318 14:31:11.007468 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerStarted","Data":"0b2d99c81a1f7840887f3ce227e54cfd164273a792f6cba13b156b417f312300"} Mar 18 14:31:11 crc kubenswrapper[4857]: I0318 14:31:11.168892 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:31:11 crc kubenswrapper[4857]: E0318 14:31:11.169448 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:31:11 crc kubenswrapper[4857]: I0318 14:31:11.180095 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77372b5-5bbb-4110-9366-b13feb8eb77d" path="/var/lib/kubelet/pods/f77372b5-5bbb-4110-9366-b13feb8eb77d/volumes" Mar 18 14:31:11 crc kubenswrapper[4857]: I0318 
14:31:11.976388 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 18 14:31:12 crc kubenswrapper[4857]: I0318 14:31:12.354852 4857 generic.go:334] "Generic (PLEG): container finished" podID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerID="1d3cd39b4af15c44ce562dee9ae9a05788ea369e29ad057032174ba1ddf5cbea" exitCode=0 Mar 18 14:31:12 crc kubenswrapper[4857]: I0318 14:31:12.355194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerDied","Data":"1d3cd39b4af15c44ce562dee9ae9a05788ea369e29ad057032174ba1ddf5cbea"} Mar 18 14:31:12 crc kubenswrapper[4857]: I0318 14:31:12.376390 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerStarted","Data":"08578350050cb0dd78a45cc22785fa4c14c09a3262907a84df685906699b2f16"} Mar 18 14:31:12 crc kubenswrapper[4857]: I0318 14:31:12.407587 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.407546253 podStartE2EDuration="3.407546253s" podCreationTimestamp="2026-03-18 14:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:12.396798143 +0000 UTC m=+1856.525926600" watchObservedRunningTime="2026-03-18 14:31:12.407546253 +0000 UTC m=+1856.536674710" Mar 18 14:31:13 crc kubenswrapper[4857]: I0318 14:31:13.867997 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 18 14:31:18 crc kubenswrapper[4857]: I0318 14:31:18.867896 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 14:31:18 crc kubenswrapper[4857]: I0318 14:31:18.909595 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Mar 18 14:31:19 crc kubenswrapper[4857]: I0318 14:31:19.539619 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 14:31:19 crc kubenswrapper[4857]: I0318 14:31:19.866139 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:31:19 crc kubenswrapper[4857]: I0318 14:31:19.866456 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:31:20 crc kubenswrapper[4857]: I0318 14:31:20.948051 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.10:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:20 crc kubenswrapper[4857]: I0318 14:31:20.948057 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.10:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.690638 4857 generic.go:334] "Generic (PLEG): container finished" podID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerID="a7d683d95b64acb6b301dbe1fa93c50e58aa6c41884ed74726f5a30aba531446" exitCode=137 Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.690723 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerDied","Data":"a7d683d95b64acb6b301dbe1fa93c50e58aa6c41884ed74726f5a30aba531446"} Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.694289 4857 generic.go:334] "Generic (PLEG): container finished" podID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" 
containerID="a2e0bcd7972be629675ee676f80564735633bd51ab5e82050b9dbb6880b10950" exitCode=137 Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.694311 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8e0b9685-6239-42b2-8e7c-c9b29baa81de","Type":"ContainerDied","Data":"a2e0bcd7972be629675ee676f80564735633bd51ab5e82050b9dbb6880b10950"} Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.896598 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.901983 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.968028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs\") pod \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.968489 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data\") pod \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.968571 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data\") pod \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.968703 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle\") pod \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.968970 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle\") pod \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.969028 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zwgz\" (UniqueName: \"kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz\") pod \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\" (UID: \"8e0b9685-6239-42b2-8e7c-c9b29baa81de\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.969112 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdf2w\" (UniqueName: \"kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w\") pod \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\" (UID: \"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d\") " Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.969825 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs" (OuterVolumeSpecName: "logs") pod "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" (UID: "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.970623 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.977525 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz" (OuterVolumeSpecName: "kube-api-access-9zwgz") pod "8e0b9685-6239-42b2-8e7c-c9b29baa81de" (UID: "8e0b9685-6239-42b2-8e7c-c9b29baa81de"). InnerVolumeSpecName "kube-api-access-9zwgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:22 crc kubenswrapper[4857]: I0318 14:31:22.978076 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w" (OuterVolumeSpecName: "kube-api-access-cdf2w") pod "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" (UID: "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d"). InnerVolumeSpecName "kube-api-access-cdf2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.010683 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data" (OuterVolumeSpecName: "config-data") pod "8e0b9685-6239-42b2-8e7c-c9b29baa81de" (UID: "8e0b9685-6239-42b2-8e7c-c9b29baa81de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.011406 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data" (OuterVolumeSpecName: "config-data") pod "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" (UID: "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.020355 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e0b9685-6239-42b2-8e7c-c9b29baa81de" (UID: "8e0b9685-6239-42b2-8e7c-c9b29baa81de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.021786 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" (UID: "49fc8b84-824e-416b-8cb2-d92ec8ff2d0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.072999 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.073032 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.073051 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.073062 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e0b9685-6239-42b2-8e7c-c9b29baa81de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.073071 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zwgz\" (UniqueName: \"kubernetes.io/projected/8e0b9685-6239-42b2-8e7c-c9b29baa81de-kube-api-access-9zwgz\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.073080 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdf2w\" (UniqueName: \"kubernetes.io/projected/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d-kube-api-access-cdf2w\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.710148 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8e0b9685-6239-42b2-8e7c-c9b29baa81de","Type":"ContainerDied","Data":"437a1c3efc8be5d313573d0dc16ca66a2feecd8ffa95dcab6b538346117ccced"} Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.710225 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.711413 4857 scope.go:117] "RemoveContainer" containerID="a2e0bcd7972be629675ee676f80564735633bd51ab5e82050b9dbb6880b10950" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.713663 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"49fc8b84-824e-416b-8cb2-d92ec8ff2d0d","Type":"ContainerDied","Data":"8c5035fa4dfd43385020568c8c6be88f789f76bf0955026bbc964aad1ad399b3"} Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.713676 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.750076 4857 scope.go:117] "RemoveContainer" containerID="a7d683d95b64acb6b301dbe1fa93c50e58aa6c41884ed74726f5a30aba531446" Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.751417 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.768623 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:23 crc kubenswrapper[4857]: I0318 14:31:23.785257 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.032710 4857 scope.go:117] "RemoveContainer" containerID="702d7bce8e59324645dd4740ced3e9950afaade8f68e44c2ea5847a3ca65bcc3" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.060474 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.099846 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: E0318 14:31:24.100772 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-log" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.100801 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-log" Mar 18 14:31:24 crc kubenswrapper[4857]: E0318 14:31:24.100824 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-metadata" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.100833 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-metadata" Mar 
18 14:31:24 crc kubenswrapper[4857]: E0318 14:31:24.100870 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.100881 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.101224 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-log" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.101256 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.101281 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" containerName="nova-metadata-metadata" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.103199 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.106660 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.107270 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.115137 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.117695 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.133606 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.134266 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.134622 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.158990 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wv57\" (UniqueName: \"kubernetes.io/projected/221ab7cd-f76f-4e82-bc62-54fd96aacde6-kube-api-access-6wv57\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.159313 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.159493 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.159548 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.160367 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.175856 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:31:24 crc kubenswrapper[4857]: E0318 14:31:24.176414 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.177521 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.200639 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274534 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc 
kubenswrapper[4857]: I0318 14:31:24.274650 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274800 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274831 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffzf\" (UniqueName: \"kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274852 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274885 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.274961 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6wv57\" (UniqueName: \"kubernetes.io/projected/221ab7cd-f76f-4e82-bc62-54fd96aacde6-kube-api-access-6wv57\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.275021 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.275090 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.275128 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.281485 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.282133 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.282230 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.285308 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/221ab7cd-f76f-4e82-bc62-54fd96aacde6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.300505 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wv57\" (UniqueName: \"kubernetes.io/projected/221ab7cd-f76f-4e82-bc62-54fd96aacde6-kube-api-access-6wv57\") pod \"nova-cell1-novncproxy-0\" (UID: \"221ab7cd-f76f-4e82-bc62-54fd96aacde6\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.377301 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.378075 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs\") pod \"nova-metadata-0\" (UID: 
\"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.378474 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.378673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.378887 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.378988 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lffzf\" (UniqueName: \"kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.387160 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.387309 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.387403 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.397118 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lffzf\" (UniqueName: \"kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf\") pod \"nova-metadata-0\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " pod="openstack/nova-metadata-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.660837 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:24 crc kubenswrapper[4857]: I0318 14:31:24.664508 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.445562 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fc8b84-824e-416b-8cb2-d92ec8ff2d0d" path="/var/lib/kubelet/pods/49fc8b84-824e-416b-8cb2-d92ec8ff2d0d/volumes" Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.446949 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0b9685-6239-42b2-8e7c-c9b29baa81de" path="/var/lib/kubelet/pods/8e0b9685-6239-42b2-8e7c-c9b29baa81de/volumes" Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.447767 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.717887 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.784793 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerStarted","Data":"42861d943a454332a15f1262d50fba590e0d81b5596ef1e676dad70146d5aa9e"} Mar 18 14:31:25 crc kubenswrapper[4857]: I0318 14:31:25.786735 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"221ab7cd-f76f-4e82-bc62-54fd96aacde6","Type":"ContainerStarted","Data":"b1b6116a0378920768434a5f9e310f9cf2d3eeb179dab2e7f848b8319af54864"} Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.738237 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.806544 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerStarted","Data":"ae5dc5c70100a69c4aaa260f8519de51840c43678dbd604f15724733f5cc52a4"} Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.806606 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerStarted","Data":"88ed517977dceea3813f5a89539270a2469523a9dbc9632683985daca7dd3e7c"} Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.808956 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"221ab7cd-f76f-4e82-bc62-54fd96aacde6","Type":"ContainerStarted","Data":"9848211c6a417a964565a60395dd721d33e014646a701101d1090952135a0b16"} Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.839262 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.8392344019999998 podStartE2EDuration="3.839234402s" podCreationTimestamp="2026-03-18 14:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:26.836612026 +0000 UTC m=+1870.965740473" watchObservedRunningTime="2026-03-18 14:31:26.839234402 +0000 UTC m=+1870.968362859" Mar 18 14:31:26 crc kubenswrapper[4857]: I0318 14:31:26.875425 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.875398722 podStartE2EDuration="3.875398722s" podCreationTimestamp="2026-03-18 14:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:26.872965721 +0000 UTC m=+1871.002094188" watchObservedRunningTime="2026-03-18 14:31:26.875398722 +0000 UTC m=+1871.004527179" Mar 18 14:31:27 crc kubenswrapper[4857]: I0318 14:31:27.911221 4857 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:31:27 crc kubenswrapper[4857]: I0318 14:31:27.912252 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:31:29 crc kubenswrapper[4857]: I0318 14:31:29.663953 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.368950 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.369033 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.387827 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.387915 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.631841 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.634551 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:30 crc kubenswrapper[4857]: I0318 14:31:30.642630 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.139985 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.140603 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zrr\" (UniqueName: \"kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.141618 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.141925 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.142038 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.142182 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.244823 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.245178 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5zrr\" (UniqueName: \"kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.245479 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.245942 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.246109 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.246399 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.248900 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.249506 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.250676 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.252742 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.256881 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.277777 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5zrr\" (UniqueName: \"kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr\") pod \"dnsmasq-dns-6b7bbf7cf9-8xgbg\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:31 crc kubenswrapper[4857]: I0318 14:31:31.457470 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:32 crc kubenswrapper[4857]: I0318 14:31:32.067478 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:31:32 crc kubenswrapper[4857]: I0318 14:31:32.372628 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" event={"ID":"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb","Type":"ContainerStarted","Data":"bf4439adf5d1d428f75367ebeb4f3d61569b7678ac26da8a11406d671e3d3760"} Mar 18 14:31:33 crc kubenswrapper[4857]: I0318 14:31:33.387309 4857 generic.go:334] "Generic (PLEG): container finished" podID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerID="61753e083ecf47dc3ca44cf0f7780de11fa06dbea2a807dba3a1591a04682646" exitCode=0 Mar 18 14:31:33 crc kubenswrapper[4857]: I0318 14:31:33.387413 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" event={"ID":"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb","Type":"ContainerDied","Data":"61753e083ecf47dc3ca44cf0f7780de11fa06dbea2a807dba3a1591a04682646"} Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.443991 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" event={"ID":"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb","Type":"ContainerStarted","Data":"da3607bc8acd0f0ae6f0ede898fa5a0856f6943433c0bcd939454ac94f4e60e9"} Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.446632 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.450499 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerID="7612da7d2714414ac3998eb393098cd03bf0f7bcedfebef6761665c8154f3f8f" exitCode=137 Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.450555 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerDied","Data":"7612da7d2714414ac3998eb393098cd03bf0f7bcedfebef6761665c8154f3f8f"} Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.505088 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" podStartSLOduration=4.50506683 podStartE2EDuration="4.50506683s" podCreationTimestamp="2026-03-18 14:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:34.472658685 +0000 UTC m=+1878.601787132" watchObservedRunningTime="2026-03-18 14:31:34.50506683 +0000 UTC m=+1878.634195287" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.556451 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.559058 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-log" containerID="cri-o://bba88419f71654d0b2e3a3ae1185e9d1075e47096aa26c8774ad7975ad4234f0" gracePeriod=30 Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.559271 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-api" containerID="cri-o://08578350050cb0dd78a45cc22785fa4c14c09a3262907a84df685906699b2f16" gracePeriod=30 Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.665181 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.665605 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.665623 4857 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.729599 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:34 crc kubenswrapper[4857]: I0318 14:31:34.757998 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.151478 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data\") pod \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.151603 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle\") pod \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.151668 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbvm\" (UniqueName: \"kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm\") pod \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.151738 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts\") pod \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\" (UID: \"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4\") " Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.165038 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts" (OuterVolumeSpecName: "scripts") pod "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" (UID: "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.165118 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm" (OuterVolumeSpecName: "kube-api-access-9bbvm") pod "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" (UID: "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4"). InnerVolumeSpecName "kube-api-access-9bbvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.259962 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbvm\" (UniqueName: \"kubernetes.io/projected/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-kube-api-access-9bbvm\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.260005 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.440498 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" (UID: "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.449885 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data" (OuterVolumeSpecName: "config-data") pod "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" (UID: "c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.466721 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.466765 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.469917 4857 generic.go:334] "Generic (PLEG): container finished" podID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerID="bba88419f71654d0b2e3a3ae1185e9d1075e47096aa26c8774ad7975ad4234f0" exitCode=143 Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.469998 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerDied","Data":"bba88419f71654d0b2e3a3ae1185e9d1075e47096aa26c8774ad7975ad4234f0"} Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.487414 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4","Type":"ContainerDied","Data":"870e8035908c7b736cb607c62d05afab8b3423a6ce2f0592f0898ea036099004"} Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.498200 4857 scope.go:117] "RemoveContainer" 
containerID="7612da7d2714414ac3998eb393098cd03bf0f7bcedfebef6761665c8154f3f8f" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.502911 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.822292 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.863379 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.955550 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 18 14:31:35 crc kubenswrapper[4857]: I0318 14:31:35.957490 4857 scope.go:117] "RemoveContainer" containerID="ffbbf2e8a34464b5593ae06ae82ba11d166823d7c5ab6050be2bc556cfa4d8d4" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.082173 4857 scope.go:117] "RemoveContainer" containerID="b3c68814516d501e9473d8643b8af560c6808e109b3d09aaf1b86447cb4eaf51" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.129097 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.153806 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.169777 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 
18 14:31:36 crc kubenswrapper[4857]: E0318 14:31:36.170130 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.193977 4857 scope.go:117] "RemoveContainer" containerID="0d4e079ecbde55db9302ed1a05cc3f9be891140a6926c3c6fa1c5b673519c03e" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.194586 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 18 14:31:36 crc kubenswrapper[4857]: E0318 14:31:36.195182 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-notifier" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195200 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-notifier" Mar 18 14:31:36 crc kubenswrapper[4857]: E0318 14:31:36.195234 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-api" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195242 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-api" Mar 18 14:31:36 crc kubenswrapper[4857]: E0318 14:31:36.195279 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-evaluator" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195285 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-evaluator" Mar 18 14:31:36 crc 
kubenswrapper[4857]: E0318 14:31:36.195303 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-listener" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195309 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-listener" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195607 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-evaluator" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195639 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-api" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195652 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-notifier" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.195666 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" containerName="aodh-listener" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.221817 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.221934 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.231843 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.232053 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.232184 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.232402 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-fvfqd" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.242434 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.621982 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.622125 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.622249 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 
14:31:36.622279 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.622317 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t424v\" (UniqueName: \"kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.622482 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.727696 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.727825 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.727856 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.727889 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t424v\" (UniqueName: \"kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.727988 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.728366 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.730862 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-c7gq9"] Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.732527 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.737190 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.737353 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.739689 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.740272 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.742230 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.759683 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-c7gq9"] Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.767145 4857 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.767453 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.797484 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t424v\" (UniqueName: \"kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v\") pod \"aodh-0\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.830912 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.830980 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.831175 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.831206 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6vnl\" 
(UniqueName: \"kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.870818 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.933956 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.934002 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6vnl\" (UniqueName: \"kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.934199 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.934238 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: 
I0318 14:31:36.946631 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.947335 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.947391 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:36 crc kubenswrapper[4857]: I0318 14:31:36.960914 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6vnl\" (UniqueName: \"kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl\") pod \"nova-cell1-cell-mapping-c7gq9\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:37 crc kubenswrapper[4857]: I0318 14:31:37.078444 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:37 crc kubenswrapper[4857]: I0318 14:31:37.245359 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4" path="/var/lib/kubelet/pods/c4b8a2e6-f003-40aa-bca2-bcb80d71e4b4/volumes" Mar 18 14:31:37 crc kubenswrapper[4857]: I0318 14:31:37.504279 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:31:37 crc kubenswrapper[4857]: W0318 14:31:37.512386 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod343e2b57_18ae_4935_95c3_2cedf23db40d.slice/crio-73dae512f03fbc5f6ef83f64bf8768e16e3bee197cbde410d9fbdc54121fe3b9 WatchSource:0}: Error finding container 73dae512f03fbc5f6ef83f64bf8768e16e3bee197cbde410d9fbdc54121fe3b9: Status 404 returned error can't find the container with id 73dae512f03fbc5f6ef83f64bf8768e16e3bee197cbde410d9fbdc54121fe3b9 Mar 18 14:31:38 crc kubenswrapper[4857]: I0318 14:31:38.035238 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerStarted","Data":"73dae512f03fbc5f6ef83f64bf8768e16e3bee197cbde410d9fbdc54121fe3b9"} Mar 18 14:31:38 crc kubenswrapper[4857]: I0318 14:31:38.068163 4857 generic.go:334] "Generic (PLEG): container finished" podID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerID="9aba6860f137599b4fa4e79e78883f6550edac68f38e88de2b4a991c900a2d69" exitCode=137 Mar 18 14:31:38 crc kubenswrapper[4857]: I0318 14:31:38.068243 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerDied","Data":"9aba6860f137599b4fa4e79e78883f6550edac68f38e88de2b4a991c900a2d69"} Mar 18 14:31:38 crc kubenswrapper[4857]: I0318 14:31:38.242654 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-c7gq9"] Mar 18 14:31:38 crc kubenswrapper[4857]: W0318 14:31:38.254059 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fb2ffad_1202_49a7_8129_1ce2ca433b2c.slice/crio-455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6 WatchSource:0}: Error finding container 455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6: Status 404 returned error can't find the container with id 455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6 Mar 18 14:31:38 crc kubenswrapper[4857]: I0318 14:31:38.556509 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.020321 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.020723 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.021002 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.021041 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.021115 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.021147 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2fsc\" (UniqueName: \"kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.021175 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts\") pod \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\" (UID: \"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0\") " Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.024882 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.024980 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.035734 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.035770 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.099511 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts" (OuterVolumeSpecName: "scripts") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.100957 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc" (OuterVolumeSpecName: "kube-api-access-r2fsc") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). InnerVolumeSpecName "kube-api-access-r2fsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.122431 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c7gq9" event={"ID":"1fb2ffad-1202-49a7-8129-1ce2ca433b2c","Type":"ContainerStarted","Data":"455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6"} Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.129304 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.129673 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0","Type":"ContainerDied","Data":"43d97d51cbbe91e25828c6440d45e720def5f84feddc4dceaa2c979b203e9992"} Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.129741 4857 scope.go:117] "RemoveContainer" containerID="9aba6860f137599b4fa4e79e78883f6550edac68f38e88de2b4a991c900a2d69" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.139042 4857 generic.go:334] "Generic (PLEG): container finished" podID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerID="08578350050cb0dd78a45cc22785fa4c14c09a3262907a84df685906699b2f16" exitCode=0 Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.139113 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerDied","Data":"08578350050cb0dd78a45cc22785fa4c14c09a3262907a84df685906699b2f16"} Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.142028 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2fsc\" (UniqueName: \"kubernetes.io/projected/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-kube-api-access-r2fsc\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.142072 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.732156 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:39 crc kubenswrapper[4857]: I0318 14:31:39.746938 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.130969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.144104 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data" (OuterVolumeSpecName: "config-data") pod "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" (UID: "e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.160890 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.160935 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.579564 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-c7gq9" podStartSLOduration=4.579538797 podStartE2EDuration="4.579538797s" podCreationTimestamp="2026-03-18 14:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:40.54986824 +0000 UTC m=+1884.678996697" watchObservedRunningTime="2026-03-18 14:31:40.579538797 +0000 UTC m=+1884.708667254" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.599378 4857 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.436s" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.599563 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c7gq9" event={"ID":"1fb2ffad-1202-49a7-8129-1ce2ca433b2c","Type":"ContainerStarted","Data":"01f565ea7b203e65660c00471b9a10748263428433993345f589cdf26c537c11"} Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.599589 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3269d9d-72f5-4efb-85ac-fa784abd1d05","Type":"ContainerDied","Data":"0b2d99c81a1f7840887f3ce227e54cfd164273a792f6cba13b156b417f312300"} Mar 18 14:31:40 crc kubenswrapper[4857]: 
I0318 14:31:40.599609 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b2d99c81a1f7840887f3ce227e54cfd164273a792f6cba13b156b417f312300" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.650191 4857 scope.go:117] "RemoveContainer" containerID="7d7617c352986d1ce168e65f6318113eb7f7c94cc20a27e27a408231e26afb3e" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.676309 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.676684 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.689951 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.718839 4857 scope.go:117] "RemoveContainer" containerID="6a6130f48a31b00a20dfd0aeac7ca132b7b86ae0bfeec09eb628ebe5067bcf79" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.779880 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.780848 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-api" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.780933 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-api" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.781010 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="sg-core" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.781073 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="sg-core" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.781184 4857 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-central-agent" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.781274 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-central-agent" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.781350 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-notification-agent" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.781420 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-notification-agent" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.781480 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="proxy-httpd" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.781540 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="proxy-httpd" Mar 18 14:31:40 crc kubenswrapper[4857]: E0318 14:31:40.781615 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-log" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.781680 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-log" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.782017 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-api" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.782106 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-central-agent" Mar 18 14:31:40 crc 
kubenswrapper[4857]: I0318 14:31:40.782183 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="proxy-httpd" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.782261 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="ceilometer-notification-agent" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.782328 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" containerName="nova-api-log" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.782392 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" containerName="sg-core" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.784870 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.795855 4857 scope.go:117] "RemoveContainer" containerID="1d3cd39b4af15c44ce562dee9ae9a05788ea369e29ad057032174ba1ddf5cbea" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.803725 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.803973 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.804185 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.836876 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.840175 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs\") pod \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.848425 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k79dp\" (UniqueName: \"kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp\") pod \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.848630 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle\") pod \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.848949 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data\") pod \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\" (UID: \"c3269d9d-72f5-4efb-85ac-fa784abd1d05\") " Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.843302 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs" (OuterVolumeSpecName: "logs") pod "c3269d9d-72f5-4efb-85ac-fa784abd1d05" (UID: "c3269d9d-72f5-4efb-85ac-fa784abd1d05"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.851445 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3269d9d-72f5-4efb-85ac-fa784abd1d05-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.873097 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp" (OuterVolumeSpecName: "kube-api-access-k79dp") pod "c3269d9d-72f5-4efb-85ac-fa784abd1d05" (UID: "c3269d9d-72f5-4efb-85ac-fa784abd1d05"). InnerVolumeSpecName "kube-api-access-k79dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.968335 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.968569 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.968733 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.979990 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.980207 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kfr\" (UniqueName: \"kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.980396 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.980622 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.980701 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:40 crc kubenswrapper[4857]: I0318 14:31:40.981153 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k79dp\" (UniqueName: 
\"kubernetes.io/projected/c3269d9d-72f5-4efb-85ac-fa784abd1d05-kube-api-access-k79dp\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.002925 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3269d9d-72f5-4efb-85ac-fa784abd1d05" (UID: "c3269d9d-72f5-4efb-85ac-fa784abd1d05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.065734 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data" (OuterVolumeSpecName: "config-data") pod "c3269d9d-72f5-4efb-85ac-fa784abd1d05" (UID: "c3269d9d-72f5-4efb-85ac-fa784abd1d05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084617 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084733 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084804 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4kfr\" (UniqueName: \"kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr\") pod \"ceilometer-0\" 
(UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084857 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084940 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.084960 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.085035 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.085056 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.085149 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.085161 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3269d9d-72f5-4efb-85ac-fa784abd1d05-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.091052 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.094242 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.095219 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.104550 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.106905 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.107256 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.111298 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.120055 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4kfr\" (UniqueName: \"kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr\") pod \"ceilometer-0\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") " pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: E0318 14:31:41.149386 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cad3b5_4f89_4a0e_8521_ceb2565ed9e0.slice/crio-43d97d51cbbe91e25828c6440d45e720def5f84feddc4dceaa2c979b203e9992\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cad3b5_4f89_4a0e_8521_ceb2565ed9e0.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.150371 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.531615 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.545013 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0" path="/var/lib/kubelet/pods/e3cad3b5-4f89-4a0e-8521-ceb2565ed9e0/volumes" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.568561 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.568618 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerStarted","Data":"7cc28e84be6f9b23535bf1612ea4d9789f303bb0bf04ee415092192c0b3c2dd2"} Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.793949 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"] Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.794528 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerName="dnsmasq-dns" containerID="cri-o://305b51e65d227539188c8f938554bdd396d7221363cb9dfda589f97ed5f7713e" gracePeriod=10 Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.923279 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.964818 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:41 crc kubenswrapper[4857]: I0318 14:31:41.992430 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:41.998455 4857 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.007322 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.007667 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.007970 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.008621 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.117309 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.117403 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.117462 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.117735 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z4ptl\" (UniqueName: \"kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.122489 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.122766 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.225589 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ptl\" (UniqueName: \"kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.225683 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.227816 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " 
pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.228096 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.228167 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.228212 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.229516 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.237813 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.239042 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.249295 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.257449 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.272548 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ptl\" (UniqueName: \"kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.272601 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: W0318 14:31:42.285004 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb273f162_887e_4d5f_8fa5_6be8fec441d2.slice/crio-ab56cc8b35adfe2f684f3c7eac8ee6a71b8d0e6af1db22e1f16554293be64cd9 WatchSource:0}: Error finding container ab56cc8b35adfe2f684f3c7eac8ee6a71b8d0e6af1db22e1f16554293be64cd9: Status 404 returned error can't find the container with id ab56cc8b35adfe2f684f3c7eac8ee6a71b8d0e6af1db22e1f16554293be64cd9 Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.352767 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.572143 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerStarted","Data":"ab56cc8b35adfe2f684f3c7eac8ee6a71b8d0e6af1db22e1f16554293be64cd9"} Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.596056 4857 generic.go:334] "Generic (PLEG): container finished" podID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerID="305b51e65d227539188c8f938554bdd396d7221363cb9dfda589f97ed5f7713e" exitCode=0 Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.596129 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" event={"ID":"96e4143b-24b7-4dcd-a77c-42c89a55eea7","Type":"ContainerDied","Data":"305b51e65d227539188c8f938554bdd396d7221363cb9dfda589f97ed5f7713e"} Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.599292 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerStarted","Data":"e42e23a9e6663fd00168c3b8b1bd4c9c5deaed813117796fe02823e255b40b61"} Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.605979 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.638960 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.639026 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.639124 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw5fb\" (UniqueName: \"kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.639152 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.639237 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.639272 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc\") pod \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\" (UID: \"96e4143b-24b7-4dcd-a77c-42c89a55eea7\") " Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.665358 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.666845 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.683574 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb" (OuterVolumeSpecName: "kube-api-access-gw5fb") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "kube-api-access-gw5fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.726639 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config" (OuterVolumeSpecName: "config") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.743306 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw5fb\" (UniqueName: \"kubernetes.io/projected/96e4143b-24b7-4dcd-a77c-42c89a55eea7-kube-api-access-gw5fb\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.743350 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.752025 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.755236 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.772266 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.790666 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "96e4143b-24b7-4dcd-a77c-42c89a55eea7" (UID: "96e4143b-24b7-4dcd-a77c-42c89a55eea7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.848248 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.848799 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.848913 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:42 crc kubenswrapper[4857]: I0318 14:31:42.849009 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/96e4143b-24b7-4dcd-a77c-42c89a55eea7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.104663 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.209186 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3269d9d-72f5-4efb-85ac-fa784abd1d05" path="/var/lib/kubelet/pods/c3269d9d-72f5-4efb-85ac-fa784abd1d05/volumes" Mar 18 14:31:43 crc 
kubenswrapper[4857]: I0318 14:31:43.657024 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerStarted","Data":"07799de065169839aff047e593ffca18d07f6d7c36968f77332c8e04d74b9958"} Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.657091 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerStarted","Data":"48762965cf2ee183c979fd441a8f1113905f3638ba4c86accc2414960e96ff3b"} Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.678509 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerStarted","Data":"dd3296fad301f43c8c768b86035d0e6162d083d8f944cb005583834f0257d7c7"} Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.683609 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerStarted","Data":"be83fea8b330c84bfe4905c84779a1906ba92656b455d8c6c513455cc0fb668f"} Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.685850 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" event={"ID":"96e4143b-24b7-4dcd-a77c-42c89a55eea7","Type":"ContainerDied","Data":"35cc88714d1336c1cd9722debd15d46528cf35f36f383c595f5ddd4de04f4faf"} Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.685913 4857 scope.go:117] "RemoveContainer" containerID="305b51e65d227539188c8f938554bdd396d7221363cb9dfda589f97ed5f7713e" Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.686301 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-82vv5" Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.791980 4857 scope.go:117] "RemoveContainer" containerID="bd461810cd5f1e28c1afed5289713d3b1e5055b946713d183b6d63186ed04cbb" Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.827935 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"] Mar 18 14:31:43 crc kubenswrapper[4857]: I0318 14:31:43.845267 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-82vv5"] Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.676863 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.693092 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.706713 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerStarted","Data":"6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3"} Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.715178 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerStarted","Data":"166c5ec08e7a1e68f138018b11f4e563da7009781cdd1bce6e2a39ba75d2d83e"} Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.739642 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerStarted","Data":"0007509499b4654a972b18d6d44b6cc05396c8cf92055398aaad39b4245844c8"} Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.754641 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 14:31:44 crc 
kubenswrapper[4857]: I0318 14:31:44.758497 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.804400 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.804368246 podStartE2EDuration="3.804368246s" podCreationTimestamp="2026-03-18 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:31:44.766736598 +0000 UTC m=+1888.895865055" watchObservedRunningTime="2026-03-18 14:31:44.804368246 +0000 UTC m=+1888.933496723" Mar 18 14:31:44 crc kubenswrapper[4857]: I0318 14:31:44.860123 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.663371084 podStartE2EDuration="8.860095259s" podCreationTimestamp="2026-03-18 14:31:36 +0000 UTC" firstStartedPulling="2026-03-18 14:31:37.521440434 +0000 UTC m=+1881.650568891" lastFinishedPulling="2026-03-18 14:31:43.718164609 +0000 UTC m=+1887.847293066" observedRunningTime="2026-03-18 14:31:44.826617836 +0000 UTC m=+1888.955746293" watchObservedRunningTime="2026-03-18 14:31:44.860095259 +0000 UTC m=+1888.989223726" Mar 18 14:31:45 crc kubenswrapper[4857]: I0318 14:31:45.180293 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" path="/var/lib/kubelet/pods/96e4143b-24b7-4dcd-a77c-42c89a55eea7/volumes" Mar 18 14:31:45 crc kubenswrapper[4857]: I0318 14:31:45.230149 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:31:47 crc kubenswrapper[4857]: I0318 14:31:47.177920 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:31:47 crc kubenswrapper[4857]: E0318 14:31:47.178849 4857 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:31:47 crc kubenswrapper[4857]: I0318 14:31:47.808860 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerStarted","Data":"f08d9cfc7184fdb65071f4e091ca90939001db2e9118d0f0503961d938f45d98"} Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.287983 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerStarted","Data":"573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e"} Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.288616 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.288339 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-central-agent" containerID="cri-o://be83fea8b330c84bfe4905c84779a1906ba92656b455d8c6c513455cc0fb668f" gracePeriod=30 Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.288850 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="proxy-httpd" containerID="cri-o://573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e" gracePeriod=30 Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.288964 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-notification-agent" containerID="cri-o://0007509499b4654a972b18d6d44b6cc05396c8cf92055398aaad39b4245844c8" gracePeriod=30 Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.289039 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="sg-core" containerID="cri-o://f08d9cfc7184fdb65071f4e091ca90939001db2e9118d0f0503961d938f45d98" gracePeriod=30 Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.328962 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.163849383 podStartE2EDuration="12.328937912s" podCreationTimestamp="2026-03-18 14:31:40 +0000 UTC" firstStartedPulling="2026-03-18 14:31:42.296534086 +0000 UTC m=+1886.425662543" lastFinishedPulling="2026-03-18 14:31:51.461622615 +0000 UTC m=+1895.590751072" observedRunningTime="2026-03-18 14:31:52.326162442 +0000 UTC m=+1896.455290899" watchObservedRunningTime="2026-03-18 14:31:52.328937912 +0000 UTC m=+1896.458066369" Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.353814 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:31:52 crc kubenswrapper[4857]: I0318 14:31:52.353875 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.304359 4857 generic.go:334] "Generic (PLEG): container finished" podID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerID="f08d9cfc7184fdb65071f4e091ca90939001db2e9118d0f0503961d938f45d98" exitCode=2 Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.304736 4857 generic.go:334] "Generic (PLEG): container finished" podID="b273f162-887e-4d5f-8fa5-6be8fec441d2" 
containerID="0007509499b4654a972b18d6d44b6cc05396c8cf92055398aaad39b4245844c8" exitCode=0 Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.304437 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerDied","Data":"f08d9cfc7184fdb65071f4e091ca90939001db2e9118d0f0503961d938f45d98"} Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.304805 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerDied","Data":"0007509499b4654a972b18d6d44b6cc05396c8cf92055398aaad39b4245844c8"} Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.307556 4857 generic.go:334] "Generic (PLEG): container finished" podID="1fb2ffad-1202-49a7-8129-1ce2ca433b2c" containerID="01f565ea7b203e65660c00471b9a10748263428433993345f589cdf26c537c11" exitCode=0 Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.307607 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c7gq9" event={"ID":"1fb2ffad-1202-49a7-8129-1ce2ca433b2c","Type":"ContainerDied","Data":"01f565ea7b203e65660c00471b9a10748263428433993345f589cdf26c537c11"} Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.368128 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:53 crc kubenswrapper[4857]: I0318 14:31:53.368135 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:55 crc 
kubenswrapper[4857]: I0318 14:31:55.642917 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" podUID="bf950907-821d-4d28-a563-f9865d7df7f0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:31:55 crc kubenswrapper[4857]: I0318 14:31:55.964913 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.125163 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data\") pod \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.125208 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6vnl\" (UniqueName: \"kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl\") pod \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.125510 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle\") pod \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.125560 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts\") pod \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\" (UID: \"1fb2ffad-1202-49a7-8129-1ce2ca433b2c\") " Mar 18 
14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.139983 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl" (OuterVolumeSpecName: "kube-api-access-b6vnl") pod "1fb2ffad-1202-49a7-8129-1ce2ca433b2c" (UID: "1fb2ffad-1202-49a7-8129-1ce2ca433b2c"). InnerVolumeSpecName "kube-api-access-b6vnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.144284 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts" (OuterVolumeSpecName: "scripts") pod "1fb2ffad-1202-49a7-8129-1ce2ca433b2c" (UID: "1fb2ffad-1202-49a7-8129-1ce2ca433b2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.193128 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fb2ffad-1202-49a7-8129-1ce2ca433b2c" (UID: "1fb2ffad-1202-49a7-8129-1ce2ca433b2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.205043 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data" (OuterVolumeSpecName: "config-data") pod "1fb2ffad-1202-49a7-8129-1ce2ca433b2c" (UID: "1fb2ffad-1202-49a7-8129-1ce2ca433b2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.229330 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.229374 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.229389 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6vnl\" (UniqueName: \"kubernetes.io/projected/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-kube-api-access-b6vnl\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.229406 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fb2ffad-1202-49a7-8129-1ce2ca433b2c-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.582238 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c7gq9" event={"ID":"1fb2ffad-1202-49a7-8129-1ce2ca433b2c","Type":"ContainerDied","Data":"455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6"} Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.582282 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455dad43757e319b7033ae3670dfb9a9a92e3f019095a2ec7554cc85161787e6" Mar 18 14:31:56 crc kubenswrapper[4857]: I0318 14:31:56.582377 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c7gq9" Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.213347 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.214036 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-log" containerID="cri-o://07799de065169839aff047e593ffca18d07f6d7c36968f77332c8e04d74b9958" gracePeriod=30 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.214562 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-api" containerID="cri-o://6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3" gracePeriod=30 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.229882 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.230148 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerName="nova-scheduler-scheduler" containerID="cri-o://2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" gracePeriod=30 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.258522 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.258925 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" containerID="cri-o://88ed517977dceea3813f5a89539270a2469523a9dbc9632683985daca7dd3e7c" gracePeriod=30 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.259952 4857 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" containerID="cri-o://ae5dc5c70100a69c4aaa260f8519de51840c43678dbd604f15724733f5cc52a4" gracePeriod=30 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.896712 4857 generic.go:334] "Generic (PLEG): container finished" podID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerID="88ed517977dceea3813f5a89539270a2469523a9dbc9632683985daca7dd3e7c" exitCode=143 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.896818 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerDied","Data":"88ed517977dceea3813f5a89539270a2469523a9dbc9632683985daca7dd3e7c"} Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.903910 4857 generic.go:334] "Generic (PLEG): container finished" podID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerID="be83fea8b330c84bfe4905c84779a1906ba92656b455d8c6c513455cc0fb668f" exitCode=0 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.903947 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerDied","Data":"be83fea8b330c84bfe4905c84779a1906ba92656b455d8c6c513455cc0fb668f"} Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.906448 4857 generic.go:334] "Generic (PLEG): container finished" podID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerID="07799de065169839aff047e593ffca18d07f6d7c36968f77332c8e04d74b9958" exitCode=143 Mar 18 14:31:57 crc kubenswrapper[4857]: I0318 14:31:57.906491 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerDied","Data":"07799de065169839aff047e593ffca18d07f6d7c36968f77332c8e04d74b9958"} Mar 18 14:31:58 crc kubenswrapper[4857]: E0318 
14:31:58.870868 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:58 crc kubenswrapper[4857]: E0318 14:31:58.873129 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:58 crc kubenswrapper[4857]: E0318 14:31:58.874863 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 14:31:58 crc kubenswrapper[4857]: E0318 14:31:58.874928 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerName="nova-scheduler-scheduler" Mar 18 14:31:59 crc kubenswrapper[4857]: I0318 14:31:59.936474 4857 generic.go:334] "Generic (PLEG): container finished" podID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerID="2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" exitCode=0 Mar 18 14:31:59 crc kubenswrapper[4857]: I0318 14:31:59.936562 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"a91e5d19-143f-43ca-8f9c-1a6ff39226bd","Type":"ContainerDied","Data":"2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1"} Mar 18 14:31:59 crc kubenswrapper[4857]: I0318 14:31:59.936805 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a91e5d19-143f-43ca-8f9c-1a6ff39226bd","Type":"ContainerDied","Data":"df7fc8675dee6b47d28437a6731f8f3874572e2af649d089c14d4aa9dfbad113"} Mar 18 14:31:59 crc kubenswrapper[4857]: I0318 14:31:59.936818 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7fc8675dee6b47d28437a6731f8f3874572e2af649d089c14d4aa9dfbad113" Mar 18 14:31:59 crc kubenswrapper[4857]: I0318 14:31:59.959679 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.000385 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle\") pod \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.000578 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data\") pod \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.000712 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7c9v\" (UniqueName: \"kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v\") pod \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\" (UID: \"a91e5d19-143f-43ca-8f9c-1a6ff39226bd\") " Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.007747 4857 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v" (OuterVolumeSpecName: "kube-api-access-v7c9v") pod "a91e5d19-143f-43ca-8f9c-1a6ff39226bd" (UID: "a91e5d19-143f-43ca-8f9c-1a6ff39226bd"). InnerVolumeSpecName "kube-api-access-v7c9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.052534 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a91e5d19-143f-43ca-8f9c-1a6ff39226bd" (UID: "a91e5d19-143f-43ca-8f9c-1a6ff39226bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.067348 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data" (OuterVolumeSpecName: "config-data") pod "a91e5d19-143f-43ca-8f9c-1a6ff39226bd" (UID: "a91e5d19-143f-43ca-8f9c-1a6ff39226bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.116248 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7c9v\" (UniqueName: \"kubernetes.io/projected/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-kube-api-access-v7c9v\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.116295 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.116309 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a91e5d19-143f-43ca-8f9c-1a6ff39226bd-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.156368 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564072-rcj95"] Mar 18 14:32:00 crc kubenswrapper[4857]: E0318 14:32:00.157201 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb2ffad-1202-49a7-8129-1ce2ca433b2c" containerName="nova-manage" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157231 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb2ffad-1202-49a7-8129-1ce2ca433b2c" containerName="nova-manage" Mar 18 14:32:00 crc kubenswrapper[4857]: E0318 14:32:00.157262 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerName="dnsmasq-dns" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157273 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerName="dnsmasq-dns" Mar 18 14:32:00 crc kubenswrapper[4857]: E0318 14:32:00.157292 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" 
containerName="init" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157300 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerName="init" Mar 18 14:32:00 crc kubenswrapper[4857]: E0318 14:32:00.157371 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerName="nova-scheduler-scheduler" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157382 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerName="nova-scheduler-scheduler" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157722 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" containerName="nova-scheduler-scheduler" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157788 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb2ffad-1202-49a7-8129-1ce2ca433b2c" containerName="nova-manage" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.157814 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e4143b-24b7-4dcd-a77c-42c89a55eea7" containerName="dnsmasq-dns" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.160121 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.162698 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.162979 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.163104 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.165430 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:32:00 crc kubenswrapper[4857]: E0318 14:32:00.166347 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.173231 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564072-rcj95"] Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.218066 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk2bj\" (UniqueName: \"kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj\") pod \"auto-csr-approver-29564072-rcj95\" (UID: \"472573ed-cb00-48c2-b290-adb0a4f69739\") " pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.319671 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rk2bj\" (UniqueName: \"kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj\") pod \"auto-csr-approver-29564072-rcj95\" (UID: \"472573ed-cb00-48c2-b290-adb0a4f69739\") " pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.337372 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk2bj\" (UniqueName: \"kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj\") pod \"auto-csr-approver-29564072-rcj95\" (UID: \"472573ed-cb00-48c2-b290-adb0a4f69739\") " pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.354428 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.354497 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.490695 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.752642 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": read tcp 10.217.0.2:51984->10.217.1.11:8775: read: connection reset by peer" Mar 18 14:32:00 crc kubenswrapper[4857]: I0318 14:32:00.752974 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.11:8775/\": read tcp 10.217.0.2:51970->10.217.1.11:8775: read: connection reset by peer" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.028919 4857 scope.go:117] "RemoveContainer" containerID="4c3d93b778fe19f2a7d569bb60fd9222a0383034cc78733875215cbc024ade3a" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.066937 4857 generic.go:334] "Generic (PLEG): container finished" podID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerID="ae5dc5c70100a69c4aaa260f8519de51840c43678dbd604f15724733f5cc52a4" exitCode=0 Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.067040 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerDied","Data":"ae5dc5c70100a69c4aaa260f8519de51840c43678dbd604f15724733f5cc52a4"} Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.067087 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.139910 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564072-rcj95"] Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.141147 4857 scope.go:117] "RemoveContainer" containerID="6d753f6109529497f751c11d6a669fafc3bb1d671dc0ae02f2fc754c6d5e54be" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.199434 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.199485 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.199509 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.201920 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.204787 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.225068 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.329741 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-config-data\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.329851 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxl28\" (UniqueName: 
\"kubernetes.io/projected/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-kube-api-access-kxl28\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.330118 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.432838 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxl28\" (UniqueName: \"kubernetes.io/projected/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-kube-api-access-kxl28\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.433081 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.433331 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-config-data\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.441551 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-config-data\") pod \"nova-scheduler-0\" (UID: 
\"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.442160 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.458501 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxl28\" (UniqueName: \"kubernetes.io/projected/edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47-kube-api-access-kxl28\") pod \"nova-scheduler-0\" (UID: \"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47\") " pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.524577 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.714538 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.847086 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs\") pod \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.847322 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs\") pod \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.847556 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle\") pod \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.847627 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lffzf\" (UniqueName: \"kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf\") pod \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.847673 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data\") pod \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\" (UID: \"37fe0738-5c0c-4ef2-ab98-0f54202f2648\") " Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.858583 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs" (OuterVolumeSpecName: "logs") pod "37fe0738-5c0c-4ef2-ab98-0f54202f2648" (UID: "37fe0738-5c0c-4ef2-ab98-0f54202f2648"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.891044 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf" (OuterVolumeSpecName: "kube-api-access-lffzf") pod "37fe0738-5c0c-4ef2-ab98-0f54202f2648" (UID: "37fe0738-5c0c-4ef2-ab98-0f54202f2648"). InnerVolumeSpecName "kube-api-access-lffzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.941939 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data" (OuterVolumeSpecName: "config-data") pod "37fe0738-5c0c-4ef2-ab98-0f54202f2648" (UID: "37fe0738-5c0c-4ef2-ab98-0f54202f2648"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.982794 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lffzf\" (UniqueName: \"kubernetes.io/projected/37fe0738-5c0c-4ef2-ab98-0f54202f2648-kube-api-access-lffzf\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.982836 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:01 crc kubenswrapper[4857]: I0318 14:32:01.982847 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37fe0738-5c0c-4ef2-ab98-0f54202f2648-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.132204 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564072-rcj95" event={"ID":"472573ed-cb00-48c2-b290-adb0a4f69739","Type":"ContainerStarted","Data":"7143926c5a2e612db272ad2516f3678909b42c4377b99c646e0463fc6a38f625"} Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.154147 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.161664 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37fe0738-5c0c-4ef2-ab98-0f54202f2648","Type":"ContainerDied","Data":"42861d943a454332a15f1262d50fba590e0d81b5596ef1e676dad70146d5aa9e"} Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.161775 4857 scope.go:117] "RemoveContainer" containerID="ae5dc5c70100a69c4aaa260f8519de51840c43678dbd604f15724733f5cc52a4" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.162031 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.177952 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37fe0738-5c0c-4ef2-ab98-0f54202f2648" (UID: "37fe0738-5c0c-4ef2-ab98-0f54202f2648"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.193011 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "37fe0738-5c0c-4ef2-ab98-0f54202f2648" (UID: "37fe0738-5c0c-4ef2-ab98-0f54202f2648"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:02 crc kubenswrapper[4857]: W0318 14:32:02.196628 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedcbb6bb_f0dc_4a1b_8bdc_0941cb35dc47.slice/crio-0e4d8ac5b740fb86eba63ffcfa0f5a711f498e8469dec9fbd60e3bf9057b45b0 WatchSource:0}: Error finding container 0e4d8ac5b740fb86eba63ffcfa0f5a711f498e8469dec9fbd60e3bf9057b45b0: Status 404 returned error can't find the container with id 0e4d8ac5b740fb86eba63ffcfa0f5a711f498e8469dec9fbd60e3bf9057b45b0 Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.211402 4857 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.213013 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/37fe0738-5c0c-4ef2-ab98-0f54202f2648-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.223174 4857 scope.go:117] "RemoveContainer" containerID="88ed517977dceea3813f5a89539270a2469523a9dbc9632683985daca7dd3e7c" Mar 18 14:32:02 crc kubenswrapper[4857]: E0318 14:32:02.348287 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87be9c38_2c92_4d01_8278_6bf4a87c3520.slice/crio-6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.635724 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.654599 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.676677 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:32:02 crc kubenswrapper[4857]: E0318 14:32:02.677419 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.677446 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" Mar 18 14:32:02 crc kubenswrapper[4857]: E0318 14:32:02.677471 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.677479 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" Mar 18 14:32:02 crc 
kubenswrapper[4857]: I0318 14:32:02.677763 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-metadata" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.677802 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" containerName="nova-metadata-log" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.679473 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.683344 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.684303 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.708643 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.835889 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-logs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.835992 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.836131 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-config-data\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.836385 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.836527 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52k64\" (UniqueName: \"kubernetes.io/projected/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-kube-api-access-52k64\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.939548 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-logs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.939607 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.939677 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-config-data\") pod \"nova-metadata-0\" (UID: 
\"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.939810 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.939907 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52k64\" (UniqueName: \"kubernetes.io/projected/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-kube-api-access-52k64\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.940393 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-logs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.947090 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.950347 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-config-data\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.960351 4857 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-52k64\" (UniqueName: \"kubernetes.io/projected/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-kube-api-access-52k64\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:02 crc kubenswrapper[4857]: I0318 14:32:02.968978 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/019058fb-aa78-4be6-9d60-ebe5a0ce7b67-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"019058fb-aa78-4be6-9d60-ebe5a0ce7b67\") " pod="openstack/nova-metadata-0" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.002391 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.190468 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fe0738-5c0c-4ef2-ab98-0f54202f2648" path="/var/lib/kubelet/pods/37fe0738-5c0c-4ef2-ab98-0f54202f2648/volumes" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.191685 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91e5d19-143f-43ca-8f9c-1a6ff39226bd" path="/var/lib/kubelet/pods/a91e5d19-143f-43ca-8f9c-1a6ff39226bd/volumes" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.200764 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47","Type":"ContainerStarted","Data":"e6d90eea1dfa8dfc138ddefbfce9c5adc3fd39c5d748decc85f1c136b9a35e35"} Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.200839 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47","Type":"ContainerStarted","Data":"0e4d8ac5b740fb86eba63ffcfa0f5a711f498e8469dec9fbd60e3bf9057b45b0"} Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.205564 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerID="6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3" exitCode=0 Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.205631 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerDied","Data":"6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3"} Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.236032 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.235998078 podStartE2EDuration="2.235998078s" podCreationTimestamp="2026-03-18 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:32:03.218474217 +0000 UTC m=+1907.347602674" watchObservedRunningTime="2026-03-18 14:32:03.235998078 +0000 UTC m=+1907.365126535" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.390432 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.508369 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ptl\" (UniqueName: \"kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.508588 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.508722 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.508879 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.508918 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.509052 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data\") pod \"87be9c38-2c92-4d01-8278-6bf4a87c3520\" (UID: \"87be9c38-2c92-4d01-8278-6bf4a87c3520\") " Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.514269 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs" (OuterVolumeSpecName: "logs") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.518264 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl" (OuterVolumeSpecName: "kube-api-access-z4ptl") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "kube-api-access-z4ptl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.574569 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data" (OuterVolumeSpecName: "config-data") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.597079 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:03 crc kubenswrapper[4857]: I0318 14:32:03.607906 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.105115 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.107879 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.108327 4857 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87be9c38-2c92-4d01-8278-6bf4a87c3520-logs\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.108341 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.108357 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ptl\" (UniqueName: \"kubernetes.io/projected/87be9c38-2c92-4d01-8278-6bf4a87c3520-kube-api-access-z4ptl\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.137794 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87be9c38-2c92-4d01-8278-6bf4a87c3520" (UID: "87be9c38-2c92-4d01-8278-6bf4a87c3520"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.173499 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.212127 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87be9c38-2c92-4d01-8278-6bf4a87c3520-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.229010 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"019058fb-aa78-4be6-9d60-ebe5a0ce7b67","Type":"ContainerStarted","Data":"bdb037143122c53ee456f31825b48384e5cdeeaedbef841e6cb9ce83a0b75d74"} Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.231613 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87be9c38-2c92-4d01-8278-6bf4a87c3520","Type":"ContainerDied","Data":"48762965cf2ee183c979fd441a8f1113905f3638ba4c86accc2414960e96ff3b"} Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.231612 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.231660 4857 scope.go:117] "RemoveContainer" containerID="6894b5068a4c73446861f0e5bf0a36c6fbe46c0c95fdf64db0430ccda97bf4c3" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.308881 4857 scope.go:117] "RemoveContainer" containerID="07799de065169839aff047e593ffca18d07f6d7c36968f77332c8e04d74b9958" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.331154 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.342276 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.361836 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 14:32:04 crc kubenswrapper[4857]: E0318 14:32:04.362779 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-api" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.362812 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-api" Mar 18 14:32:04 crc kubenswrapper[4857]: E0318 14:32:04.362848 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-log" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.362861 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-log" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.363289 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" containerName="nova-api-log" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.363314 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" 
containerName="nova-api-api" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.365237 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.369675 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.370462 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.370633 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.378586 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.521847 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-logs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.521990 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cpl5\" (UniqueName: \"kubernetes.io/projected/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-kube-api-access-6cpl5\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.522159 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 
14:32:04.522252 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-config-data\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.522371 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.522404 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.624889 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-config-data\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.625247 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.625273 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.625407 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-logs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.625471 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cpl5\" (UniqueName: \"kubernetes.io/projected/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-kube-api-access-6cpl5\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.625538 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.626624 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-logs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.629389 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.630274 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.630770 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-config-data\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.630774 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.643255 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cpl5\" (UniqueName: \"kubernetes.io/projected/c4cd7203-ecc0-4c47-abd4-de4a574f24ba-kube-api-access-6cpl5\") pod \"nova-api-0\" (UID: \"c4cd7203-ecc0-4c47-abd4-de4a574f24ba\") " pod="openstack/nova-api-0" Mar 18 14:32:04 crc kubenswrapper[4857]: I0318 14:32:04.697942 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 14:32:05 crc kubenswrapper[4857]: I0318 14:32:05.230299 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87be9c38-2c92-4d01-8278-6bf4a87c3520" path="/var/lib/kubelet/pods/87be9c38-2c92-4d01-8278-6bf4a87c3520/volumes" Mar 18 14:32:05 crc kubenswrapper[4857]: I0318 14:32:05.261104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"019058fb-aa78-4be6-9d60-ebe5a0ce7b67","Type":"ContainerStarted","Data":"ecd1e7d5ebc1c561d06f1f8d69138ed87858189b8d7971df8d9777a816219f77"} Mar 18 14:32:05 crc kubenswrapper[4857]: I0318 14:32:05.954415 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.279368 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cd7203-ecc0-4c47-abd4-de4a574f24ba","Type":"ContainerStarted","Data":"fa41110e2bfcbc799d8f384249a0e5b6216da317a09575f6386c33c762416694"} Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.282493 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"019058fb-aa78-4be6-9d60-ebe5a0ce7b67","Type":"ContainerStarted","Data":"4f84bc0924c5dc6a15b1589e5ccd557a9c0078c7f65b2255ad2038573d54e65f"} Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.286489 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564072-rcj95" event={"ID":"472573ed-cb00-48c2-b290-adb0a4f69739","Type":"ContainerStarted","Data":"e064293d27d95c5430bf540f491605545fdcc216699b5ead83cb106b78936929"} Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.325890 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.325854362 podStartE2EDuration="4.325854362s" podCreationTimestamp="2026-03-18 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:32:06.322050166 +0000 UTC m=+1910.451178623" watchObservedRunningTime="2026-03-18 14:32:06.325854362 +0000 UTC m=+1910.454982819" Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.354544 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564072-rcj95" podStartSLOduration=4.021640989 podStartE2EDuration="6.354513533s" podCreationTimestamp="2026-03-18 14:32:00 +0000 UTC" firstStartedPulling="2026-03-18 14:32:01.161042838 +0000 UTC m=+1905.290171295" lastFinishedPulling="2026-03-18 14:32:03.493915392 +0000 UTC m=+1907.623043839" observedRunningTime="2026-03-18 14:32:06.351956599 +0000 UTC m=+1910.481085056" watchObservedRunningTime="2026-03-18 14:32:06.354513533 +0000 UTC m=+1910.483641990" Mar 18 14:32:06 crc kubenswrapper[4857]: I0318 14:32:06.526531 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 18 14:32:07 crc kubenswrapper[4857]: I0318 14:32:07.316179 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cd7203-ecc0-4c47-abd4-de4a574f24ba","Type":"ContainerStarted","Data":"22410db3e004e5939931d1bde23552cbb03f663fbe59bf4a47abe16c29a0a049"} Mar 18 14:32:07 crc kubenswrapper[4857]: I0318 14:32:07.316459 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cd7203-ecc0-4c47-abd4-de4a574f24ba","Type":"ContainerStarted","Data":"9654b5e55a5a1102731d15e9c8a161eda438e4d428a3d56f0f6b5e06433418d7"} Mar 18 14:32:08 crc kubenswrapper[4857]: I0318 14:32:08.624543 4857 generic.go:334] "Generic (PLEG): container finished" podID="472573ed-cb00-48c2-b290-adb0a4f69739" containerID="e064293d27d95c5430bf540f491605545fdcc216699b5ead83cb106b78936929" exitCode=0 Mar 18 14:32:08 crc kubenswrapper[4857]: I0318 14:32:08.624662 4857 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-infra/auto-csr-approver-29564072-rcj95" event={"ID":"472573ed-cb00-48c2-b290-adb0a4f69739","Type":"ContainerDied","Data":"e064293d27d95c5430bf540f491605545fdcc216699b5ead83cb106b78936929"} Mar 18 14:32:08 crc kubenswrapper[4857]: I0318 14:32:08.654719 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.654700106 podStartE2EDuration="4.654700106s" podCreationTimestamp="2026-03-18 14:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:32:07.345108144 +0000 UTC m=+1911.474236611" watchObservedRunningTime="2026-03-18 14:32:08.654700106 +0000 UTC m=+1912.783828563" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.170267 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.535952 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk2bj\" (UniqueName: \"kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj\") pod \"472573ed-cb00-48c2-b290-adb0a4f69739\" (UID: \"472573ed-cb00-48c2-b290-adb0a4f69739\") " Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.560456 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj" (OuterVolumeSpecName: "kube-api-access-rk2bj") pod "472573ed-cb00-48c2-b290-adb0a4f69739" (UID: "472573ed-cb00-48c2-b290-adb0a4f69739"). InnerVolumeSpecName "kube-api-access-rk2bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.649966 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk2bj\" (UniqueName: \"kubernetes.io/projected/472573ed-cb00-48c2-b290-adb0a4f69739-kube-api-access-rk2bj\") on node \"crc\" DevicePath \"\"" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.665340 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564072-rcj95" event={"ID":"472573ed-cb00-48c2-b290-adb0a4f69739","Type":"ContainerDied","Data":"7143926c5a2e612db272ad2516f3678909b42c4377b99c646e0463fc6a38f625"} Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.665418 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7143926c5a2e612db272ad2516f3678909b42c4377b99c646e0463fc6a38f625" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.665683 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564072-rcj95" Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.738220 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564066-5lpxl"] Mar 18 14:32:10 crc kubenswrapper[4857]: I0318 14:32:10.750678 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564066-5lpxl"] Mar 18 14:32:11 crc kubenswrapper[4857]: I0318 14:32:11.167339 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 14:32:11 crc kubenswrapper[4857]: I0318 14:32:11.180814 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab24ef5b-3d16-4324-93e3-8e127478a489" path="/var/lib/kubelet/pods/ab24ef5b-3d16-4324-93e3-8e127478a489/volumes" Mar 18 14:32:11 crc 
kubenswrapper[4857]: I0318 14:32:11.729170 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 14:32:11 crc kubenswrapper[4857]: I0318 14:32:11.780642 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 18 14:32:12 crc kubenswrapper[4857]: I0318 14:32:12.814920 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 14:32:13 crc kubenswrapper[4857]: I0318 14:32:13.002698 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 14:32:13 crc kubenswrapper[4857]: I0318 14:32:13.003535 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 14:32:14 crc kubenswrapper[4857]: I0318 14:32:14.015919 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="019058fb-aa78-4be6-9d60-ebe5a0ce7b67" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.20:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:32:14 crc kubenswrapper[4857]: I0318 14:32:14.015924 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="019058fb-aa78-4be6-9d60-ebe5a0ce7b67" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.20:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:32:15 crc kubenswrapper[4857]: I0318 14:32:15.302997 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:32:15 crc kubenswrapper[4857]: I0318 14:32:15.315330 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:32:15 crc kubenswrapper[4857]: E0318 14:32:15.315698 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:32:15 crc kubenswrapper[4857]: I0318 14:32:15.379743 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 14:32:16 crc kubenswrapper[4857]: I0318 14:32:16.363050 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4cd7203-ecc0-4c47-abd4-de4a574f24ba" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.21:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 14:32:16 crc kubenswrapper[4857]: I0318 14:32:16.363200 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4cd7203-ecc0-4c47-abd4-de4a574f24ba" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.21:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:32:21 crc kubenswrapper[4857]: I0318 14:32:21.160807 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:32:21 crc kubenswrapper[4857]: I0318 14:32:21.195877 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 14:32:22 crc kubenswrapper[4857]: I0318 14:32:22.526667 4857 generic.go:334] "Generic (PLEG): container finished" podID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerID="573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e" exitCode=137 Mar 18 14:32:22 crc kubenswrapper[4857]: I0318 14:32:22.526971 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerDied","Data":"573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e"}
Mar 18 14:32:22 crc kubenswrapper[4857]: I0318 14:32:22.849278 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 18 14:32:22 crc kubenswrapper[4857]: I0318 14:32:22.849825 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: E0318 14:32:23.009199 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb273f162_887e_4d5f_8fa5_6be8fec441d2.slice/crio-573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb273f162_887e_4d5f_8fa5_6be8fec441d2.slice/crio-conmon-573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.025164 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.028834 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.037931 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.341842 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.486580 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.487111 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.487290 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.487876 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488154 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488368 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488490 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4kfr\" (UniqueName: \"kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488322 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488533 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.488562 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data\") pod \"b273f162-887e-4d5f-8fa5-6be8fec441d2\" (UID: \"b273f162-887e-4d5f-8fa5-6be8fec441d2\") "
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.489504 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.489527 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b273f162-887e-4d5f-8fa5-6be8fec441d2-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.495096 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts" (OuterVolumeSpecName: "scripts") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.495415 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr" (OuterVolumeSpecName: "kube-api-access-q4kfr") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "kube-api-access-q4kfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.523828 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.547491 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b273f162-887e-4d5f-8fa5-6be8fec441d2","Type":"ContainerDied","Data":"ab56cc8b35adfe2f684f3c7eac8ee6a71b8d0e6af1db22e1f16554293be64cd9"}
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.547601 4857 scope.go:117] "RemoveContainer" containerID="573991d64a29a6a68480f247e0163e3867dcc776e99e1fa0d9202fdd89409f3e"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.548063 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.561005 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.597429 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4kfr\" (UniqueName: \"kubernetes.io/projected/b273f162-887e-4d5f-8fa5-6be8fec441d2-kube-api-access-q4kfr\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.597465 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-scripts\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.597477 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.604998 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.614687 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.972053 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:23 crc kubenswrapper[4857]: I0318 14:32:23.972122 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.017067 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data" (OuterVolumeSpecName: "config-data") pod "b273f162-887e-4d5f-8fa5-6be8fec441d2" (UID: "b273f162-887e-4d5f-8fa5-6be8fec441d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.074656 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b273f162-887e-4d5f-8fa5-6be8fec441d2-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.091123 4857 scope.go:117] "RemoveContainer" containerID="f08d9cfc7184fdb65071f4e091ca90939001db2e9118d0f0503961d938f45d98"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.117213 4857 scope.go:117] "RemoveContainer" containerID="0007509499b4654a972b18d6d44b6cc05396c8cf92055398aaad39b4245844c8"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.138305 4857 scope.go:117] "RemoveContainer" containerID="be83fea8b330c84bfe4905c84779a1906ba92656b455d8c6c513455cc0fb668f"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.193361 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.213047 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.229969 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:32:24 crc kubenswrapper[4857]: E0318 14:32:24.230802 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="proxy-httpd"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.230839 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="proxy-httpd"
Mar 18 14:32:24 crc kubenswrapper[4857]: E0318 14:32:24.230881 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472573ed-cb00-48c2-b290-adb0a4f69739" containerName="oc"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.230889 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="472573ed-cb00-48c2-b290-adb0a4f69739" containerName="oc"
Mar 18 14:32:24 crc kubenswrapper[4857]: E0318 14:32:24.230919 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-notification-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.230926 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-notification-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: E0318 14:32:24.230957 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="sg-core"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.230963 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="sg-core"
Mar 18 14:32:24 crc kubenswrapper[4857]: E0318 14:32:24.230976 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-central-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.230982 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-central-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.231308 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-central-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.231334 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="472573ed-cb00-48c2-b290-adb0a4f69739" containerName="oc"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.231345 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="proxy-httpd"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.231357 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="sg-core"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.231375 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" containerName="ceilometer-notification-agent"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.234007 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.239012 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.239358 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.239517 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.249735 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.382915 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383004 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383030 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2h8d\" (UniqueName: \"kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383069 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383547 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383700 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383771 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.383995 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.486512 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.486649 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.486741 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.486814 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2h8d\" (UniqueName: \"kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.486925 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.487287 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.487382 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.487440 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.487502 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.487842 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.728216 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.728259 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.731783 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.740127 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.743833 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2h8d\" (UniqueName: \"kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.747714 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " pod="openstack/ceilometer-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.764252 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.809855 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.832404 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 18 14:32:24 crc kubenswrapper[4857]: I0318 14:32:24.857403 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 18 14:32:25 crc kubenswrapper[4857]: I0318 14:32:25.179722 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b273f162-887e-4d5f-8fa5-6be8fec441d2" path="/var/lib/kubelet/pods/b273f162-887e-4d5f-8fa5-6be8fec441d2/volumes"
Mar 18 14:32:25 crc kubenswrapper[4857]: I0318 14:32:25.565880 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:32:25 crc kubenswrapper[4857]: I0318 14:32:25.807132 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerStarted","Data":"f4a0ef740d72a660b5fbbd51fd6e12244e1a1bbf3dd0c011a5de8b186a14919b"}
Mar 18 14:32:25 crc kubenswrapper[4857]: I0318 14:32:25.816314 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 18 14:32:27 crc kubenswrapper[4857]: I0318 14:32:27.022917 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerStarted","Data":"b1360e8390bf5085f1b53430e060478f994984078b398797d6173b525bb1b063"}
Mar 18 14:32:27 crc kubenswrapper[4857]: I0318 14:32:27.194951 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"
Mar 18 14:32:27 crc kubenswrapper[4857]: E0318 14:32:27.195552 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 14:32:28 crc kubenswrapper[4857]: I0318 14:32:28.037037 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerStarted","Data":"80e7fbafa7c3cfb940f4f588100abbc25248c2f1f20feededdf5a6a309bcce31"}
Mar 18 14:32:29 crc kubenswrapper[4857]: I0318 14:32:29.444074 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerStarted","Data":"619e75b53aa5d7714c81d93b85314378a0d6c2dbeea1938a0f0932a7a54c9794"}
Mar 18 14:32:32 crc kubenswrapper[4857]: I0318 14:32:32.623598 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerStarted","Data":"1b8f3ecaac725e0d827233577d426f32898d84b37391a8825b4084cfba4bbd5d"}
Mar 18 14:32:32 crc kubenswrapper[4857]: I0318 14:32:32.624006 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 18 14:32:32 crc kubenswrapper[4857]: I0318 14:32:32.659255 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.152502445 podStartE2EDuration="8.659225907s" podCreationTimestamp="2026-03-18 14:32:24 +0000 UTC" firstStartedPulling="2026-03-18 14:32:25.581105021 +0000 UTC m=+1929.710233488" lastFinishedPulling="2026-03-18 14:32:31.087828493 +0000 UTC m=+1935.216956950" observedRunningTime="2026-03-18 14:32:32.651274026 +0000 UTC m=+1936.780402503" watchObservedRunningTime="2026-03-18 14:32:32.659225907 +0000 UTC m=+1936.788354364"
Mar 18 14:32:41 crc kubenswrapper[4857]: I0318 14:32:41.187762 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"
Mar 18 14:32:41 crc kubenswrapper[4857]: E0318 14:32:41.196612 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 14:32:52 crc kubenswrapper[4857]: I0318 14:32:52.164840 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"
Mar 18 14:32:52 crc kubenswrapper[4857]: E0318 14:32:52.166063 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 14:32:54 crc kubenswrapper[4857]: I0318 14:32:54.888346 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 18 14:33:01 crc kubenswrapper[4857]: I0318 14:33:01.406886 4857 scope.go:117] "RemoveContainer" containerID="a6f24be92f9b3f98471fa568880e2b653734130042b51aab369f6306e6d36747"
Mar 18 14:33:01 crc kubenswrapper[4857]: I0318 14:33:01.470532 4857 scope.go:117] "RemoveContainer" containerID="5f3b0cfd9734cfbf42cb9975364ac040be25269a12f3a7d03e93ed530d4c428c"
Mar 18 14:33:01 crc kubenswrapper[4857]: I0318 14:33:01.522321 4857 scope.go:117] "RemoveContainer" containerID="fce90338e24ea1f22326564a290a06286b541c35da0399701d3f9ea0f3146e6c"
Mar 18 14:33:05 crc kubenswrapper[4857]: I0318 14:33:05.164870 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9"
Mar 18 14:33:06 crc kubenswrapper[4857]: I0318 14:33:06.311514 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc"}
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.046230 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-4sc5j"]
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.063193 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-4sc5j"]
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.151926 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-sjzwc"]
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.154369 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.250794 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4c05d5-43c8-4aad-9052-a519d7c6d182" path="/var/lib/kubelet/pods/fd4c05d5-43c8-4aad-9052-a519d7c6d182/volumes"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.251815 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sjzwc"]
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.309473 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.310133 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrkp\" (UniqueName: \"kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.310171 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.413120 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrkp\" (UniqueName: \"kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.413166 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.413209 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.420563 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.422573 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.441367 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrkp\" (UniqueName: \"kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp\") pod \"heat-db-sync-sjzwc\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:07 crc kubenswrapper[4857]: I0318 14:33:07.542169 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjzwc"
Mar 18 14:33:08 crc kubenswrapper[4857]: I0318 14:33:08.318677 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sjzwc"]
Mar 18 14:33:08 crc kubenswrapper[4857]: I0318 14:33:08.373553 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjzwc" event={"ID":"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e","Type":"ContainerStarted","Data":"5b7480088bde0f08096b56f6d8f9c8cf1b41285894828eaface63c24f66e9aac"}
Mar 18 14:33:09 crc kubenswrapper[4857]: I0318 14:33:09.547792 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.108234 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.109221 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="proxy-httpd" containerID="cri-o://1b8f3ecaac725e0d827233577d426f32898d84b37391a8825b4084cfba4bbd5d" gracePeriod=30
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.109221 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="sg-core" containerID="cri-o://619e75b53aa5d7714c81d93b85314378a0d6c2dbeea1938a0f0932a7a54c9794" gracePeriod=30
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.109308 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-notification-agent" containerID="cri-o://80e7fbafa7c3cfb940f4f588100abbc25248c2f1f20feededdf5a6a309bcce31" gracePeriod=30
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.109832 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-central-agent" containerID="cri-o://b1360e8390bf5085f1b53430e060478f994984078b398797d6173b525bb1b063" gracePeriod=30
Mar 18 14:33:10 crc kubenswrapper[4857]: I0318 14:33:10.859249 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437503 4857 generic.go:334] "Generic (PLEG): container finished" podID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerID="1b8f3ecaac725e0d827233577d426f32898d84b37391a8825b4084cfba4bbd5d" exitCode=0
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437559 4857 generic.go:334] "Generic (PLEG): container finished" podID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerID="619e75b53aa5d7714c81d93b85314378a0d6c2dbeea1938a0f0932a7a54c9794" exitCode=2
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437569 4857 generic.go:334] "Generic (PLEG): container finished" podID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerID="b1360e8390bf5085f1b53430e060478f994984078b398797d6173b525bb1b063" exitCode=0
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437598 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerDied","Data":"1b8f3ecaac725e0d827233577d426f32898d84b37391a8825b4084cfba4bbd5d"}
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437636 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerDied","Data":"619e75b53aa5d7714c81d93b85314378a0d6c2dbeea1938a0f0932a7a54c9794"}
Mar 18 14:33:11 crc kubenswrapper[4857]: I0318 14:33:11.437652 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerDied","Data":"b1360e8390bf5085f1b53430e060478f994984078b398797d6173b525bb1b063"}
Mar 18 14:33:14 crc kubenswrapper[4857]: I0318 14:33:14.539336 4857 generic.go:334] "Generic (PLEG): container finished" podID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerID="80e7fbafa7c3cfb940f4f588100abbc25248c2f1f20feededdf5a6a309bcce31" exitCode=0
Mar 18 14:33:14 crc kubenswrapper[4857]: I0318 14:33:14.539659 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerDied","Data":"80e7fbafa7c3cfb940f4f588100abbc25248c2f1f20feededdf5a6a309bcce31"}
Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.292795 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.481505 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.481596 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.481636 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.481684 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2h8d\" (UniqueName: \"kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.481881 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.482054 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.482124 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.482153 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml\") pod \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\" (UID: \"83dfa1e8-9b49-4786-9830-3821d7fbf8cd\") " Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.482487 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.483072 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.483476 4857 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.483501 4857 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.517349 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts" (OuterVolumeSpecName: "scripts") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.544203 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d" (OuterVolumeSpecName: "kube-api-access-q2h8d") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "kube-api-access-q2h8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.591461 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2h8d\" (UniqueName: \"kubernetes.io/projected/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-kube-api-access-q2h8d\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.591502 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.604977 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.605235 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83dfa1e8-9b49-4786-9830-3821d7fbf8cd","Type":"ContainerDied","Data":"f4a0ef740d72a660b5fbbd51fd6e12244e1a1bbf3dd0c011a5de8b186a14919b"} Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.605301 4857 scope.go:117] "RemoveContainer" containerID="1b8f3ecaac725e0d827233577d426f32898d84b37391a8825b4084cfba4bbd5d" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.605537 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.671029 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.675167 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.706131 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.706186 4857 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.706204 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.782729 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data" 
(OuterVolumeSpecName: "config-data") pod "83dfa1e8-9b49-4786-9830-3821d7fbf8cd" (UID: "83dfa1e8-9b49-4786-9830-3821d7fbf8cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.792793 4857 scope.go:117] "RemoveContainer" containerID="619e75b53aa5d7714c81d93b85314378a0d6c2dbeea1938a0f0932a7a54c9794" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.810089 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dfa1e8-9b49-4786-9830-3821d7fbf8cd-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.829649 4857 scope.go:117] "RemoveContainer" containerID="80e7fbafa7c3cfb940f4f588100abbc25248c2f1f20feededdf5a6a309bcce31" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.867246 4857 scope.go:117] "RemoveContainer" containerID="b1360e8390bf5085f1b53430e060478f994984078b398797d6173b525bb1b063" Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.952710 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.977325 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:33:15 crc kubenswrapper[4857]: I0318 14:33:15.998115 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:33:16 crc kubenswrapper[4857]: E0318 14:33:15.998885 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-central-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.998912 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-central-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: E0318 14:33:15.998968 4857 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-notification-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.998978 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-notification-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: E0318 14:33:15.999006 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="proxy-httpd" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999014 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="proxy-httpd" Mar 18 14:33:16 crc kubenswrapper[4857]: E0318 14:33:15.999037 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="sg-core" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999047 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="sg-core" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999319 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="proxy-httpd" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999355 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-central-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999380 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="sg-core" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:15.999395 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" containerName="ceilometer-notification-agent" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.002780 4857 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.014606 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" containerID="cri-o://a66aa366b37615f34867b83e13af04a4bf6bc0287e8447d4b5651f10313f4b1b" gracePeriod=604795 Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.016488 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.016696 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.017030 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.017591 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.017991 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018114 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018215 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-run-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018319 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-scripts\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018510 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr2f\" (UniqueName: \"kubernetes.io/projected/97a08b04-cfff-4c38-90d4-aa20b69ade73-kube-api-access-mzr2f\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018626 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-log-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.018775 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-config-data\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.052675 4857 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.196595 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.196697 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.196810 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-run-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.196850 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-scripts\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.196940 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzr2f\" (UniqueName: \"kubernetes.io/projected/97a08b04-cfff-4c38-90d4-aa20b69ade73-kube-api-access-mzr2f\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.197016 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-log-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.197097 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-config-data\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.197251 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.198087 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-log-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.198171 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97a08b04-cfff-4c38-90d4-aa20b69ade73-run-httpd\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.202877 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc 
kubenswrapper[4857]: I0318 14:33:16.203959 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.217680 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.218394 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-config-data\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.228811 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzr2f\" (UniqueName: \"kubernetes.io/projected/97a08b04-cfff-4c38-90d4-aa20b69ade73-kube-api-access-mzr2f\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.232711 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97a08b04-cfff-4c38-90d4-aa20b69ade73-scripts\") pod \"ceilometer-0\" (UID: \"97a08b04-cfff-4c38-90d4-aa20b69ade73\") " pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.264684 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" 
containerID="cri-o://7e45f48edb184b3f99d6359ccd7e9ebc2ef57a7227a04ffb4564d796cb97a864" gracePeriod=604794 Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.342460 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 18 14:33:16 crc kubenswrapper[4857]: I0318 14:33:16.981663 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 18 14:33:17 crc kubenswrapper[4857]: I0318 14:33:17.193573 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83dfa1e8-9b49-4786-9830-3821d7fbf8cd" path="/var/lib/kubelet/pods/83dfa1e8-9b49-4786-9830-3821d7fbf8cd/volumes" Mar 18 14:33:17 crc kubenswrapper[4857]: I0318 14:33:17.650644 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"e7c724b374b6b35a95721b3ae0d9bfe16f7c9ab507d9655b320efeb9eceae280"} Mar 18 14:33:19 crc kubenswrapper[4857]: I0318 14:33:19.428928 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: connect: connection refused" Mar 18 14:33:19 crc kubenswrapper[4857]: I0318 14:33:19.857330 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: connect: connection refused" Mar 18 14:33:23 crc kubenswrapper[4857]: I0318 14:33:23.738005 4857 generic.go:334] "Generic (PLEG): container finished" podID="865ce56e-0936-4018-9dd8-17343c925b91" containerID="a66aa366b37615f34867b83e13af04a4bf6bc0287e8447d4b5651f10313f4b1b" exitCode=0 Mar 18 14:33:23 crc kubenswrapper[4857]: I0318 14:33:23.738108 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerDied","Data":"a66aa366b37615f34867b83e13af04a4bf6bc0287e8447d4b5651f10313f4b1b"} Mar 18 14:33:23 crc kubenswrapper[4857]: I0318 14:33:23.741849 4857 generic.go:334] "Generic (PLEG): container finished" podID="062e357c-5b17-403b-add2-71ce46b3423a" containerID="7e45f48edb184b3f99d6359ccd7e9ebc2ef57a7227a04ffb4564d796cb97a864" exitCode=0 Mar 18 14:33:23 crc kubenswrapper[4857]: I0318 14:33:23.741907 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerDied","Data":"7e45f48edb184b3f99d6359ccd7e9ebc2ef57a7227a04ffb4564d796cb97a864"} Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.326660 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.334355 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343434 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343509 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpz2x\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343584 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343626 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343654 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343725 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343813 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343900 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.343957 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f269c\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.344054 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.344080 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 
14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.344133 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.344817 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.467257 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.468439 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.471646 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x" (OuterVolumeSpecName: "kube-api-access-gpz2x") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). 
InnerVolumeSpecName "kube-api-access-gpz2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.472150 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info" (OuterVolumeSpecName: "pod-info") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492404 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492560 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492638 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492636 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492796 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492854 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492910 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492937 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf\") pod \"865ce56e-0936-4018-9dd8-17343c925b91\" (UID: \"865ce56e-0936-4018-9dd8-17343c925b91\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.492973 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.493016 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.493412 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.493945 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c" (OuterVolumeSpecName: "kube-api-access-f269c") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "kube-api-access-f269c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502424 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpz2x\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-kube-api-access-gpz2x\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502458 4857 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502473 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502484 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502495 4857 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/865ce56e-0936-4018-9dd8-17343c925b91-pod-info\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502505 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f269c\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-kube-api-access-f269c\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.502515 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc 
kubenswrapper[4857]: I0318 14:33:33.509686 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.510416 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info" (OuterVolumeSpecName: "pod-info") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.527124 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.529172 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.529212 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.569247 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data" (OuterVolumeSpecName: "config-data") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.575934 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.602814 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605459 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605611 4857 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/062e357c-5b17-403b-add2-71ce46b3423a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605703 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605804 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605872 4857 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605988 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.606079 4857 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/062e357c-5b17-403b-add2-71ce46b3423a-pod-info\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 
14:33:33.606141 4857 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/865ce56e-0936-4018-9dd8-17343c925b91-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.605499 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf" (OuterVolumeSpecName: "server-conf") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.626374 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data" (OuterVolumeSpecName: "config-data") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.646990 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2" (OuterVolumeSpecName: "persistence") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "pvc-8efa5760-232c-456b-b2ce-da089306e1b2". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.690655 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf" (OuterVolumeSpecName: "server-conf") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: E0318 14:33:33.710463 4857 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/vol_data.json]: open /var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"062e357c-5b17-403b-add2-71ce46b3423a\" (UID: \"062e357c-5b17-403b-add2-71ce46b3423a\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/vol_data.json]: open 
/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes/kubernetes.io~csi/pvc-085057a2-b093-446e-a066-c90d3f1d6ee0/vol_data.json: no such file or directory" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.711641 4857 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/865ce56e-0936-4018-9dd8-17343c925b91-server-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.711695 4857 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-server-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.711708 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/062e357c-5b17-403b-add2-71ce46b3423a-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.711935 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") on node \"crc\" " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.783813 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0" (OuterVolumeSpecName: "persistence") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). InnerVolumeSpecName "pvc-085057a2-b093-446e-a066-c90d3f1d6ee0". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.788339 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.789741 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8efa5760-232c-456b-b2ce-da089306e1b2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2") on node "crc" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.793326 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "865ce56e-0936-4018-9dd8-17343c925b91" (UID: "865ce56e-0936-4018-9dd8-17343c925b91"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.814365 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/865ce56e-0936-4018-9dd8-17343c925b91-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.814415 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.814448 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") on node \"crc\" " Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.876728 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "062e357c-5b17-403b-add2-71ce46b3423a" (UID: "062e357c-5b17-403b-add2-71ce46b3423a"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.892324 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.893946 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-085057a2-b093-446e-a066-c90d3f1d6ee0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0") on node "crc" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.917943 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.918017 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/062e357c-5b17-403b-add2-71ce46b3423a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.921099 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"865ce56e-0936-4018-9dd8-17343c925b91","Type":"ContainerDied","Data":"83826f6e772fdebc532573e31d9113b71dfddc80ef3c32684b0eaae99ce6ccc1"} Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.921240 4857 scope.go:117] "RemoveContainer" containerID="a66aa366b37615f34867b83e13af04a4bf6bc0287e8447d4b5651f10313f4b1b" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.921264 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.927904 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"062e357c-5b17-403b-add2-71ce46b3423a","Type":"ContainerDied","Data":"33e778216fed3d6a19e183a1d38d10302b31fae6d88e402c5278fc357e2a9b70"} Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.928190 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 18 14:33:33 crc kubenswrapper[4857]: I0318 14:33:33.981255 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.002983 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.023029 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.037411 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.067390 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:33:34 crc kubenswrapper[4857]: E0318 14:33:34.068058 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="setup-container" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068076 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="setup-container" Mar 18 14:33:34 crc kubenswrapper[4857]: E0318 14:33:34.068099 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068107 4857 
state_mem.go:107] "Deleted CPUSet assignment" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: E0318 14:33:34.068132 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068141 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: E0318 14:33:34.068176 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="setup-container" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068186 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="setup-container" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068488 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.068505 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.070174 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.070232 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.070352 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.073143 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.073349 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.073411 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.073657 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.073783 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.075049 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.075232 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.075345 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s56f2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.122570 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.122888 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.123009 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsch\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-kube-api-access-rcsch\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.123561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e447043a-8fa6-4b8c-b103-57fd3b484088-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.124310 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.124476 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.124664 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.124816 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e447043a-8fa6-4b8c-b103-57fd3b484088-pod-info\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.125213 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-server-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.125277 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.126965 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-config-data\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.134691 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.376550 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsch\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-kube-api-access-rcsch\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.376891 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e447043a-8fa6-4b8c-b103-57fd3b484088-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.376946 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377045 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpx5m\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-kube-api-access-dpx5m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377080 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf037310-f1c6-404e-b55a-f23c33b43373-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377135 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377156 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377212 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377238 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377276 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf037310-f1c6-404e-b55a-f23c33b43373-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377352 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377380 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e447043a-8fa6-4b8c-b103-57fd3b484088-pod-info\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377428 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377476 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-server-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377554 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377593 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-config-data\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377651 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377687 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377769 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377789 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.377827 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.378853 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.381364 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.382114 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-server-conf\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.382135 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e447043a-8fa6-4b8c-b103-57fd3b484088-config-data\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.382344 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.388142 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.388396 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e447043a-8fa6-4b8c-b103-57fd3b484088-pod-info\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.388494 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.388518 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5900510260eda10d1720a81f8ea5bb3416f28283122ef270378e9e5c921d5a4b/globalmount\"" pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.401187 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.401260 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e447043a-8fa6-4b8c-b103-57fd3b484088-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.413959 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsch\" (UniqueName: \"kubernetes.io/projected/e447043a-8fa6-4b8c-b103-57fd3b484088-kube-api-access-rcsch\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.428341 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="865ce56e-0936-4018-9dd8-17343c925b91" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.133:5671: i/o timeout" Mar 18 14:33:34 crc 
kubenswrapper[4857]: I0318 14:33:34.481674 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.481823 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.481891 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.481980 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482005 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482054 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482147 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpx5m\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-kube-api-access-dpx5m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482171 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf037310-f1c6-404e-b55a-f23c33b43373-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482257 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482279 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.482299 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf037310-f1c6-404e-b55a-f23c33b43373-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.483485 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.483990 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.487621 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.487743 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.487906 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf037310-f1c6-404e-b55a-f23c33b43373-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.488156 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf037310-f1c6-404e-b55a-f23c33b43373-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.488412 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.488445 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6e5296cdc2d629d606120081853b2f8996ebb05829b621cae3a3133c67b1a52/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.488558 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.490712 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf037310-f1c6-404e-b55a-f23c33b43373-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.493846 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.503778 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpx5m\" (UniqueName: \"kubernetes.io/projected/cf037310-f1c6-404e-b55a-f23c33b43373-kube-api-access-dpx5m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.733779 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8efa5760-232c-456b-b2ce-da089306e1b2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8efa5760-232c-456b-b2ce-da089306e1b2\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf037310-f1c6-404e-b55a-f23c33b43373\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.779316 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-085057a2-b093-446e-a066-c90d3f1d6ee0\") pod \"rabbitmq-server-2\" (UID: \"e447043a-8fa6-4b8c-b103-57fd3b484088\") " pod="openstack/rabbitmq-server-2" Mar 18 14:33:34 crc kubenswrapper[4857]: I0318 14:33:34.856847 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="062e357c-5b17-403b-add2-71ce46b3423a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.136:5671: i/o timeout" Mar 18 14:33:35 crc kubenswrapper[4857]: I0318 14:33:35.218649 4857 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:33:35 crc kubenswrapper[4857]: I0318 14:33:35.243291 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Mar 18 14:33:35 crc kubenswrapper[4857]: I0318 14:33:35.278378 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062e357c-5b17-403b-add2-71ce46b3423a" path="/var/lib/kubelet/pods/062e357c-5b17-403b-add2-71ce46b3423a/volumes" Mar 18 14:33:35 crc kubenswrapper[4857]: I0318 14:33:35.280668 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865ce56e-0936-4018-9dd8-17343c925b91" path="/var/lib/kubelet/pods/865ce56e-0936-4018-9dd8-17343c925b91/volumes" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.227239 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gjg8h"] Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.230682 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gjg8h"] Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.230877 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.234039 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313080 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313590 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313621 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313663 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrfc\" (UniqueName: \"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313733 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313781 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.313857 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.336805 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gjg8h"] Mar 18 14:33:39 crc kubenswrapper[4857]: E0318 14:33:39.338040 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-ctrfc openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" podUID="2890fae0-ffda-442e-86ba-17f16807191b" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.415986 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: 
\"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.416322 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.416469 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.416675 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.416888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.416983 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.417119 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrfc\" (UniqueName: \"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.417150 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.417669 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.418217 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.420296 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 
14:33:39.421650 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.421940 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.430354 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-r9gxm"] Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.432684 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.445866 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrfc\" (UniqueName: \"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc\") pod \"dnsmasq-dns-7d84b4d45c-gjg8h\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.486508 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-r9gxm"] Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.498649 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.518517 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-config\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.518605 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.518796 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.518926 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.518996 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" 
(UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.519061 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.519348 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2n6\" (UniqueName: \"kubernetes.io/projected/b3a981c6-60b8-4191-a6c1-111dc8997817-kube-api-access-7t2n6\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.519499 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.957930 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t2n6\" (UniqueName: \"kubernetes.io/projected/b3a981c6-60b8-4191-a6c1-111dc8997817-kube-api-access-7t2n6\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958038 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-config\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958150 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958271 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958359 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " 
pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958430 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.958463 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.959438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.960400 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-config\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.960986 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc 
kubenswrapper[4857]: I0318 14:33:39.961566 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.962743 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:39 crc kubenswrapper[4857]: I0318 14:33:39.963470 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3a981c6-60b8-4191-a6c1-111dc8997817-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:39.995869 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t2n6\" (UniqueName: \"kubernetes.io/projected/b3a981c6-60b8-4191-a6c1-111dc8997817-kube-api-access-7t2n6\") pod \"dnsmasq-dns-6f6df4f56c-r9gxm\" (UID: \"b3a981c6-60b8-4191-a6c1-111dc8997817\") " pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.060307 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.060615 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.060927 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.061119 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctrfc\" (UniqueName: \"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.061339 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.061575 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.061819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config\") pod 
\"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.062176 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.062650 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam\") pod \"2890fae0-ffda-442e-86ba-17f16807191b\" (UID: \"2890fae0-ffda-442e-86ba-17f16807191b\") " Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.062893 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.063541 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.063767 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config" (OuterVolumeSpecName: "config") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.064628 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.064879 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.064982 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.065857 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.065922 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.066463 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc" (OuterVolumeSpecName: "kube-api-access-ctrfc") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "kube-api-access-ctrfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.070066 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2890fae0-ffda-442e-86ba-17f16807191b" (UID: "2890fae0-ffda-442e-86ba-17f16807191b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.111650 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.169742 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2890fae0-ffda-442e-86ba-17f16807191b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.169818 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctrfc\" (UniqueName: \"kubernetes.io/projected/2890fae0-ffda-442e-86ba-17f16807191b-kube-api-access-ctrfc\") on node \"crc\" DevicePath \"\"" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.517124 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-gjg8h" Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.631145 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gjg8h"] Mar 18 14:33:40 crc kubenswrapper[4857]: I0318 14:33:40.646718 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-gjg8h"] Mar 18 14:33:41 crc kubenswrapper[4857]: I0318 14:33:41.660347 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2890fae0-ffda-442e-86ba-17f16807191b" path="/var/lib/kubelet/pods/2890fae0-ffda-442e-86ba-17f16807191b/volumes" Mar 18 14:33:43 crc kubenswrapper[4857]: I0318 14:33:43.233917 4857 scope.go:117] "RemoveContainer" containerID="7d1427952d362233c9d1826cf66228a45035946c097dd5362c988677f4388a9b" Mar 18 14:33:43 crc kubenswrapper[4857]: E0318 14:33:43.587810 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 18 14:33:43 crc kubenswrapper[4857]: E0318 14:33:43.587921 4857 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 18 14:33:43 crc kubenswrapper[4857]: E0318 14:33:43.588164 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnrkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-sjzwc_openstack(d1b19cf8-b3a5-41a0-b839-ec48b892ee5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
18 14:33:43 crc kubenswrapper[4857]: E0318 14:33:43.590131 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-sjzwc" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" Mar 18 14:33:43 crc kubenswrapper[4857]: E0318 14:33:43.683376 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-sjzwc" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" Mar 18 14:33:49 crc kubenswrapper[4857]: E0318 14:33:49.501424 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Mar 18 14:33:49 crc kubenswrapper[4857]: E0318 14:33:49.502205 4857 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Mar 18 14:33:49 crc kubenswrapper[4857]: E0318 14:33:49.502483 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h6fh656h688h566h84h59fh57fh75h589h5d7h66dhdh567h77hc5h547h597hb6h657h57ch64dh96h55fh8ch5cch64h55fh5c9h649h689h556q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzr2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(97a08b04-cfff-4c38-90d4-aa20b69ade73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 14:33:49 crc kubenswrapper[4857]: I0318 14:33:49.662247 4857 scope.go:117] "RemoveContainer" containerID="7e45f48edb184b3f99d6359ccd7e9ebc2ef57a7227a04ffb4564d796cb97a864" Mar 18 14:33:49 crc kubenswrapper[4857]: I0318 14:33:49.853772 4857 scope.go:117] "RemoveContainer" containerID="271778425daaf4fd5103cf0e854ebbdd9d1759a853d19656e12ae26244a5f2f6" Mar 18 14:33:50 crc kubenswrapper[4857]: I0318 14:33:50.537312 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 14:33:51 crc kubenswrapper[4857]: I0318 14:33:51.081502 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf037310-f1c6-404e-b55a-f23c33b43373","Type":"ContainerStarted","Data":"7dd5d5873d6df55252f2e03c0ca86169102052c7f9351ce1fd4bffc0bdcaa0a5"} Mar 18 14:33:51 crc kubenswrapper[4857]: I0318 14:33:51.106653 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-r9gxm"] Mar 18 14:33:51 
crc kubenswrapper[4857]: W0318 14:33:51.176108 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode447043a_8fa6_4b8c_b103_57fd3b484088.slice/crio-3812d9cd00bf3d413c845932ad9bd8bb15b343cc4ac91c69870d2a0ab1b633f9 WatchSource:0}: Error finding container 3812d9cd00bf3d413c845932ad9bd8bb15b343cc4ac91c69870d2a0ab1b633f9: Status 404 returned error can't find the container with id 3812d9cd00bf3d413c845932ad9bd8bb15b343cc4ac91c69870d2a0ab1b633f9 Mar 18 14:33:51 crc kubenswrapper[4857]: I0318 14:33:51.181586 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Mar 18 14:33:52 crc kubenswrapper[4857]: I0318 14:33:52.097264 4857 generic.go:334] "Generic (PLEG): container finished" podID="b3a981c6-60b8-4191-a6c1-111dc8997817" containerID="25d17ef5a8901876142ab8d763473dd01f1c80edaf1bce1adadf2dff89fae8b1" exitCode=0 Mar 18 14:33:52 crc kubenswrapper[4857]: I0318 14:33:52.097505 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" event={"ID":"b3a981c6-60b8-4191-a6c1-111dc8997817","Type":"ContainerDied","Data":"25d17ef5a8901876142ab8d763473dd01f1c80edaf1bce1adadf2dff89fae8b1"} Mar 18 14:33:52 crc kubenswrapper[4857]: I0318 14:33:52.097580 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" event={"ID":"b3a981c6-60b8-4191-a6c1-111dc8997817","Type":"ContainerStarted","Data":"694438dc462c4e7aa9eeb22e098a4231b0e0b5e6c0854123c3a59ca9e3bc160b"} Mar 18 14:33:52 crc kubenswrapper[4857]: I0318 14:33:52.107986 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"03084a94c4024891d1c7b2d86e5461ab71155ca5fdae221c453f1459db2d0d03"} Mar 18 14:33:52 crc kubenswrapper[4857]: I0318 14:33:52.110513 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-2" event={"ID":"e447043a-8fa6-4b8c-b103-57fd3b484088","Type":"ContainerStarted","Data":"3812d9cd00bf3d413c845932ad9bd8bb15b343cc4ac91c69870d2a0ab1b633f9"} Mar 18 14:33:53 crc kubenswrapper[4857]: I0318 14:33:53.125496 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" event={"ID":"b3a981c6-60b8-4191-a6c1-111dc8997817","Type":"ContainerStarted","Data":"d2ffc26d0fcbe5cb859b121474be22925cb49d169ad8ad6f4d89e1b115b77fa4"} Mar 18 14:33:53 crc kubenswrapper[4857]: I0318 14:33:53.126129 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:33:53 crc kubenswrapper[4857]: I0318 14:33:53.161433 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" podStartSLOduration=14.161407028 podStartE2EDuration="14.161407028s" podCreationTimestamp="2026-03-18 14:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:33:53.153340675 +0000 UTC m=+2017.282469132" watchObservedRunningTime="2026-03-18 14:33:53.161407028 +0000 UTC m=+2017.290535485" Mar 18 14:33:54 crc kubenswrapper[4857]: I0318 14:33:54.139076 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"bf6a69dc64eb3a8be5a813dd1e9341bf0214efb5e244881ebed9a34b19f1460a"} Mar 18 14:33:54 crc kubenswrapper[4857]: I0318 14:33:54.142126 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf037310-f1c6-404e-b55a-f23c33b43373","Type":"ContainerStarted","Data":"8b3f37d75a5a61621f86f423f7e7f02191dfb7cd05cb4bc9254dbbbcf29a0c37"} Mar 18 14:33:54 crc kubenswrapper[4857]: I0318 14:33:54.145021 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-2" event={"ID":"e447043a-8fa6-4b8c-b103-57fd3b484088","Type":"ContainerStarted","Data":"f4fed0805f406df9aaf470317c8bd7c1ba9217c108d1e15e604c368eabc01689"} Mar 18 14:33:57 crc kubenswrapper[4857]: E0318 14:33:57.499969 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" Mar 18 14:33:58 crc kubenswrapper[4857]: I0318 14:33:58.519950 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjzwc" event={"ID":"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e","Type":"ContainerStarted","Data":"33d5a7221a6c74b334204ec059565087dc8d0fa98bb8562d10c7cf520cd07530"} Mar 18 14:33:58 crc kubenswrapper[4857]: I0318 14:33:58.539233 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"97352bb0d656bed3349666451b9e2acb84e53fe3314e2d0f6900c95566402f9c"} Mar 18 14:33:58 crc kubenswrapper[4857]: I0318 14:33:58.540742 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 18 14:33:58 crc kubenswrapper[4857]: E0318 14:33:58.555260 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" Mar 18 14:33:59 crc kubenswrapper[4857]: I0318 14:33:59.583778 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-sjzwc" podStartSLOduration=3.101805729 podStartE2EDuration="52.583723453s" podCreationTimestamp="2026-03-18 14:33:07 +0000 
UTC" firstStartedPulling="2026-03-18 14:33:08.336316559 +0000 UTC m=+1972.465445016" lastFinishedPulling="2026-03-18 14:33:57.818234263 +0000 UTC m=+2021.947362740" observedRunningTime="2026-03-18 14:33:59.579003115 +0000 UTC m=+2023.708131602" watchObservedRunningTime="2026-03-18 14:33:59.583723453 +0000 UTC m=+2023.712851930" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.119185 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-r9gxm" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.153987 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564074-5f7nq"] Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.160099 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.166272 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.166493 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.166539 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.177520 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564074-5f7nq"] Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.244663 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.245031 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" 
containerName="dnsmasq-dns" containerID="cri-o://da3607bc8acd0f0ae6f0ede898fa5a0856f6943433c0bcd939454ac94f4e60e9" gracePeriod=10 Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.271238 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6m5k\" (UniqueName: \"kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k\") pod \"auto-csr-approver-29564074-5f7nq\" (UID: \"ff2dd36c-2e2f-439d-89d2-444c435f7749\") " pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.375582 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6m5k\" (UniqueName: \"kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k\") pod \"auto-csr-approver-29564074-5f7nq\" (UID: \"ff2dd36c-2e2f-439d-89d2-444c435f7749\") " pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.400360 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6m5k\" (UniqueName: \"kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k\") pod \"auto-csr-approver-29564074-5f7nq\" (UID: \"ff2dd36c-2e2f-439d-89d2-444c435f7749\") " pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:00 crc kubenswrapper[4857]: I0318 14:34:00.882869 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:01 crc kubenswrapper[4857]: I0318 14:34:00.917010 4857 generic.go:334] "Generic (PLEG): container finished" podID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerID="da3607bc8acd0f0ae6f0ede898fa5a0856f6943433c0bcd939454ac94f4e60e9" exitCode=0 Mar 18 14:34:01 crc kubenswrapper[4857]: I0318 14:34:00.917074 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" event={"ID":"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb","Type":"ContainerDied","Data":"da3607bc8acd0f0ae6f0ede898fa5a0856f6943433c0bcd939454ac94f4e60e9"} Mar 18 14:34:01 crc kubenswrapper[4857]: I0318 14:34:01.789801 4857 scope.go:117] "RemoveContainer" containerID="e9f5f32a7cff1e1d677cd5eec47c2d2667cba16c41b31530969d13a4ecaaced9" Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.254084 4857 scope.go:117] "RemoveContainer" containerID="12907e0236b8db836d4b44514e4494d4cb6867835367a755b97d1eccbe8e64f7" Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.808384 4857 scope.go:117] "RemoveContainer" containerID="12ba4cc9b33a1047d5091f62eabbf8f05f73ec2439592588b995457cf06505fa" Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.925795 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.995214 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" event={"ID":"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb","Type":"ContainerDied","Data":"bf4439adf5d1d428f75367ebeb4f3d61569b7678ac26da8a11406d671e3d3760"} Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.995281 4857 scope.go:117] "RemoveContainer" containerID="da3607bc8acd0f0ae6f0ede898fa5a0856f6943433c0bcd939454ac94f4e60e9" Mar 18 14:34:03 crc kubenswrapper[4857]: I0318 14:34:03.995720 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.014622 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.014707 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.014833 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.014876 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5zrr\" (UniqueName: \"kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.014908 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.015057 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb\") pod \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\" (UID: \"14ff0e14-e1cd-4d9c-8d01-f79813c13bdb\") " Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.054740 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr" (OuterVolumeSpecName: "kube-api-access-d5zrr") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "kube-api-access-d5zrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.055292 4857 scope.go:117] "RemoveContainer" containerID="61753e083ecf47dc3ca44cf0f7780de11fa06dbea2a807dba3a1591a04682646" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.122330 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5zrr\" (UniqueName: \"kubernetes.io/projected/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-kube-api-access-d5zrr\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.166743 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.171409 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.194333 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.197056 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config" (OuterVolumeSpecName: "config") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.203775 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" (UID: "14ff0e14-e1cd-4d9c-8d01-f79813c13bdb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.224625 4857 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.224682 4857 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-config\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.224698 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.224715 4857 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.224727 4857 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:04 crc kubenswrapper[4857]: W0318 14:34:04.299802 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff2dd36c_2e2f_439d_89d2_444c435f7749.slice/crio-27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0 WatchSource:0}: Error finding container 27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0: Status 404 returned error can't find the container with id 27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0 Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.306900 4857 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564074-5f7nq"] Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.446434 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:34:04 crc kubenswrapper[4857]: I0318 14:34:04.463098 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg"] Mar 18 14:34:05 crc kubenswrapper[4857]: I0318 14:34:05.026519 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" event={"ID":"ff2dd36c-2e2f-439d-89d2-444c435f7749","Type":"ContainerStarted","Data":"27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0"} Mar 18 14:34:05 crc kubenswrapper[4857]: I0318 14:34:05.029956 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"fdb2200b298e4eeb43a92b8bc952f8b97d17c90d6e2667b29c76de9b46119703"} Mar 18 14:34:05 crc kubenswrapper[4857]: I0318 14:34:05.061781 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.716577422 podStartE2EDuration="50.061739032s" podCreationTimestamp="2026-03-18 14:33:15 +0000 UTC" firstStartedPulling="2026-03-18 14:33:17.001253386 +0000 UTC m=+1981.130381843" lastFinishedPulling="2026-03-18 14:34:03.346414996 +0000 UTC m=+2027.475543453" observedRunningTime="2026-03-18 14:34:05.056372437 +0000 UTC m=+2029.185500924" watchObservedRunningTime="2026-03-18 14:34:05.061739032 +0000 UTC m=+2029.190867489" Mar 18 14:34:05 crc kubenswrapper[4857]: I0318 14:34:05.202489 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" path="/var/lib/kubelet/pods/14ff0e14-e1cd-4d9c-8d01-f79813c13bdb/volumes" Mar 18 14:34:06 crc kubenswrapper[4857]: I0318 14:34:06.702562 4857 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-8xgbg" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.13:5353: i/o timeout" Mar 18 14:34:07 crc kubenswrapper[4857]: I0318 14:34:07.062101 4857 generic.go:334] "Generic (PLEG): container finished" podID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" containerID="33d5a7221a6c74b334204ec059565087dc8d0fa98bb8562d10c7cf520cd07530" exitCode=0 Mar 18 14:34:07 crc kubenswrapper[4857]: I0318 14:34:07.062186 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjzwc" event={"ID":"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e","Type":"ContainerDied","Data":"33d5a7221a6c74b334204ec059565087dc8d0fa98bb8562d10c7cf520cd07530"} Mar 18 14:34:08 crc kubenswrapper[4857]: I0318 14:34:08.077495 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" event={"ID":"ff2dd36c-2e2f-439d-89d2-444c435f7749","Type":"ContainerStarted","Data":"379b4dc48d0d66124e8a359a7733da0fc144f3091d65217bb127ce424e86c197"} Mar 18 14:34:08 crc kubenswrapper[4857]: I0318 14:34:08.105295 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" podStartSLOduration=5.825186004 podStartE2EDuration="8.105272431s" podCreationTimestamp="2026-03-18 14:34:00 +0000 UTC" firstStartedPulling="2026-03-18 14:34:04.303522193 +0000 UTC m=+2028.432650650" lastFinishedPulling="2026-03-18 14:34:06.58360862 +0000 UTC m=+2030.712737077" observedRunningTime="2026-03-18 14:34:08.095825013 +0000 UTC m=+2032.224953470" watchObservedRunningTime="2026-03-18 14:34:08.105272431 +0000 UTC m=+2032.234400888" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.093734 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sjzwc" 
event={"ID":"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e","Type":"ContainerDied","Data":"5b7480088bde0f08096b56f6d8f9c8cf1b41285894828eaface63c24f66e9aac"} Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.095646 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b7480088bde0f08096b56f6d8f9c8cf1b41285894828eaface63c24f66e9aac" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.096620 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjzwc" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.558084 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle\") pod \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.558164 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data\") pod \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.558363 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnrkp\" (UniqueName: \"kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp\") pod \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\" (UID: \"d1b19cf8-b3a5-41a0-b839-ec48b892ee5e\") " Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.586404 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp" (OuterVolumeSpecName: "kube-api-access-hnrkp") pod "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" (UID: "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e"). 
InnerVolumeSpecName "kube-api-access-hnrkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.667726 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnrkp\" (UniqueName: \"kubernetes.io/projected/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-kube-api-access-hnrkp\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.687528 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" (UID: "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.722414 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data" (OuterVolumeSpecName: "config-data") pod "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" (UID: "d1b19cf8-b3a5-41a0-b839-ec48b892ee5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.770341 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:09 crc kubenswrapper[4857]: I0318 14:34:09.770387 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:10 crc kubenswrapper[4857]: I0318 14:34:10.189304 4857 generic.go:334] "Generic (PLEG): container finished" podID="ff2dd36c-2e2f-439d-89d2-444c435f7749" containerID="379b4dc48d0d66124e8a359a7733da0fc144f3091d65217bb127ce424e86c197" exitCode=0 Mar 18 14:34:10 crc kubenswrapper[4857]: I0318 14:34:10.190207 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sjzwc" Mar 18 14:34:10 crc kubenswrapper[4857]: I0318 14:34:10.192079 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" event={"ID":"ff2dd36c-2e2f-439d-89d2-444c435f7749","Type":"ContainerDied","Data":"379b4dc48d0d66124e8a359a7733da0fc144f3091d65217bb127ce424e86c197"} Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.753730 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5cc4978d9b-95h9v"] Mar 18 14:34:11 crc kubenswrapper[4857]: E0318 14:34:11.763241 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" containerName="heat-db-sync" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.763281 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" containerName="heat-db-sync" Mar 18 14:34:11 crc kubenswrapper[4857]: E0318 14:34:11.765081 4857 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="dnsmasq-dns" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.765102 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="dnsmasq-dns" Mar 18 14:34:11 crc kubenswrapper[4857]: E0318 14:34:11.765126 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="init" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.765133 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="init" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.767263 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" containerName="heat-db-sync" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.767321 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ff0e14-e1cd-4d9c-8d01-f79813c13bdb" containerName="dnsmasq-dns" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.768388 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.791780 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5cc4978d9b-95h9v"] Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.857664 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bms\" (UniqueName: \"kubernetes.io/projected/5fd5571e-79f0-4266-9b29-c60ea73a918d-kube-api-access-48bms\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.857787 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-combined-ca-bundle\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.858155 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data-custom\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.858517 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.862983 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-65d99fb45d-wdcmd"] Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.865214 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.906674 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65d99fb45d-wdcmd"] Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.929155 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5fbb7cf74b-jgtw7"] Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.932507 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.953297 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fbb7cf74b-jgtw7"] Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.953314 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972227 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-public-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972412 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msqrz\" (UniqueName: \"kubernetes.io/projected/f9fcb1a7-8c36-4029-8711-4d48a03468c3-kube-api-access-msqrz\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972532 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data-custom\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972606 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-internal-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972634 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pz7\" (UniqueName: 
\"kubernetes.io/projected/8b588aa2-d372-4e34-9bff-4bf820185b48-kube-api-access-n9pz7\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972746 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-combined-ca-bundle\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972802 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48bms\" (UniqueName: \"kubernetes.io/projected/5fd5571e-79f0-4266-9b29-c60ea73a918d-kube-api-access-48bms\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972827 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-internal-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972854 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-combined-ca-bundle\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972870 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-public-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972901 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972957 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data-custom\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.972987 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-combined-ca-bundle\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.973085 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data-custom\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.973118 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.973166 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.978923 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-combined-ca-bundle\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.980797 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:11 crc kubenswrapper[4857]: I0318 14:34:11.988415 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fd5571e-79f0-4266-9b29-c60ea73a918d-config-data-custom\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.005078 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48bms\" (UniqueName: 
\"kubernetes.io/projected/5fd5571e-79f0-4266-9b29-c60ea73a918d-kube-api-access-48bms\") pod \"heat-engine-5cc4978d9b-95h9v\" (UID: \"5fd5571e-79f0-4266-9b29-c60ea73a918d\") " pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.081275 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6m5k\" (UniqueName: \"kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k\") pod \"ff2dd36c-2e2f-439d-89d2-444c435f7749\" (UID: \"ff2dd36c-2e2f-439d-89d2-444c435f7749\") " Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084020 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data-custom\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084116 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084320 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-public-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084433 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msqrz\" (UniqueName: 
\"kubernetes.io/projected/f9fcb1a7-8c36-4029-8711-4d48a03468c3-kube-api-access-msqrz\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084495 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data-custom\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084565 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-internal-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084593 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pz7\" (UniqueName: \"kubernetes.io/projected/8b588aa2-d372-4e34-9bff-4bf820185b48-kube-api-access-n9pz7\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084621 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-combined-ca-bundle\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084652 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-internal-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084687 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-public-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084727 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.084840 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-combined-ca-bundle\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.085627 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k" (OuterVolumeSpecName: "kube-api-access-k6m5k") pod "ff2dd36c-2e2f-439d-89d2-444c435f7749" (UID: "ff2dd36c-2e2f-439d-89d2-444c435f7749"). InnerVolumeSpecName "kube-api-access-k6m5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.094469 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-public-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.095331 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data-custom\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.101298 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-config-data\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.101326 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-combined-ca-bundle\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.101451 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-internal-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc 
kubenswrapper[4857]: I0318 14:34:12.102110 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-public-tls-certs\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.102892 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-combined-ca-bundle\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.103557 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.104026 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f9fcb1a7-8c36-4029-8711-4d48a03468c3-config-data-custom\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.107216 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b588aa2-d372-4e34-9bff-4bf820185b48-internal-tls-certs\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.122839 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-msqrz\" (UniqueName: \"kubernetes.io/projected/f9fcb1a7-8c36-4029-8711-4d48a03468c3-kube-api-access-msqrz\") pod \"heat-api-65d99fb45d-wdcmd\" (UID: \"f9fcb1a7-8c36-4029-8711-4d48a03468c3\") " pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.125657 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pz7\" (UniqueName: \"kubernetes.io/projected/8b588aa2-d372-4e34-9bff-4bf820185b48-kube-api-access-n9pz7\") pod \"heat-cfnapi-5fbb7cf74b-jgtw7\" (UID: \"8b588aa2-d372-4e34-9bff-4bf820185b48\") " pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.188336 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6m5k\" (UniqueName: \"kubernetes.io/projected/ff2dd36c-2e2f-439d-89d2-444c435f7749-kube-api-access-k6m5k\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.201522 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.214296 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.232632 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" event={"ID":"ff2dd36c-2e2f-439d-89d2-444c435f7749","Type":"ContainerDied","Data":"27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0"} Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.232687 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ba108b26d963f4f1e46417a973a9e1662308a36cb0e9833a24a2a8ea241ad0" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.232731 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564074-5f7nq" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.258226 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.700813 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564068-95p85"] Mar 18 14:34:12 crc kubenswrapper[4857]: I0318 14:34:12.731680 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564068-95p85"] Mar 18 14:34:13 crc kubenswrapper[4857]: I0318 14:34:13.429619 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b145d97a-a264-4d03-9908-a3957a52ceb0" path="/var/lib/kubelet/pods/b145d97a-a264-4d03-9908-a3957a52ceb0/volumes" Mar 18 14:34:13 crc kubenswrapper[4857]: I0318 14:34:13.487127 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5cc4978d9b-95h9v"] Mar 18 14:34:13 crc kubenswrapper[4857]: I0318 14:34:13.507277 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fbb7cf74b-jgtw7"] Mar 18 14:34:13 crc kubenswrapper[4857]: W0318 14:34:13.798150 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9fcb1a7_8c36_4029_8711_4d48a03468c3.slice/crio-393ec32a00287fd4316951ddad288f04bc88a1db1223fdd9c7c43ac7605d2d50 WatchSource:0}: Error finding container 393ec32a00287fd4316951ddad288f04bc88a1db1223fdd9c7c43ac7605d2d50: Status 404 returned error can't find the container with id 393ec32a00287fd4316951ddad288f04bc88a1db1223fdd9c7c43ac7605d2d50 Mar 18 14:34:13 crc kubenswrapper[4857]: I0318 14:34:13.808433 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-65d99fb45d-wdcmd"] Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.453131 4857 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-engine-5cc4978d9b-95h9v" event={"ID":"5fd5571e-79f0-4266-9b29-c60ea73a918d","Type":"ContainerStarted","Data":"b2447f43218d755ee7e3d7e7b16df6fa0e4e53ca026774757f2a41fbde84368c"} Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.453485 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5cc4978d9b-95h9v" event={"ID":"5fd5571e-79f0-4266-9b29-c60ea73a918d","Type":"ContainerStarted","Data":"50b326360136fc058e8a287efdcc344827a740b3800d74b456c8fef51bcd834a"} Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.453653 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.454573 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" event={"ID":"8b588aa2-d372-4e34-9bff-4bf820185b48","Type":"ContainerStarted","Data":"9125e1818d903a55dc099ae44b0fb83512fff95899c2c9d577e975ada327e817"} Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.456208 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65d99fb45d-wdcmd" event={"ID":"f9fcb1a7-8c36-4029-8711-4d48a03468c3","Type":"ContainerStarted","Data":"393ec32a00287fd4316951ddad288f04bc88a1db1223fdd9c7c43ac7605d2d50"} Mar 18 14:34:14 crc kubenswrapper[4857]: I0318 14:34:14.483203 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5cc4978d9b-95h9v" podStartSLOduration=3.483166087 podStartE2EDuration="3.483166087s" podCreationTimestamp="2026-03-18 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:34:14.474362875 +0000 UTC m=+2038.603491332" watchObservedRunningTime="2026-03-18 14:34:14.483166087 +0000 UTC m=+2038.612294544" Mar 18 14:34:16 crc kubenswrapper[4857]: I0318 14:34:16.428589 4857 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.141252 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw"] Mar 18 14:34:20 crc kubenswrapper[4857]: E0318 14:34:20.142315 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2dd36c-2e2f-439d-89d2-444c435f7749" containerName="oc" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.142335 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2dd36c-2e2f-439d-89d2-444c435f7749" containerName="oc" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.142680 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff2dd36c-2e2f-439d-89d2-444c435f7749" containerName="oc" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.143649 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.149287 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.149729 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.149885 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.150014 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.221802 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw"] Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.277447 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4v9\" (UniqueName: \"kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.277599 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.278122 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.278418 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.381767 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj4v9\" (UniqueName: 
\"kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.382311 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.382685 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.383023 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.389428 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: 
\"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.389845 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.400858 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj4v9\" (UniqueName: \"kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.442993 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.487511 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.948156 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-65d99fb45d-wdcmd" event={"ID":"f9fcb1a7-8c36-4029-8711-4d48a03468c3","Type":"ContainerStarted","Data":"358d59a31e105f150b9caa523c68410158abaeb8033d07484bb7f80f7338af3c"} Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.949015 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.987056 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" event={"ID":"8b588aa2-d372-4e34-9bff-4bf820185b48","Type":"ContainerStarted","Data":"2ba710c352326ba43653d746b0d53f832d70d45a9429e912a3823622f706f9af"} Mar 18 14:34:20 crc kubenswrapper[4857]: I0318 14:34:20.987983 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:21 crc kubenswrapper[4857]: I0318 14:34:21.004922 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-65d99fb45d-wdcmd" podStartSLOduration=4.581698787 podStartE2EDuration="10.004890055s" podCreationTimestamp="2026-03-18 14:34:11 +0000 UTC" firstStartedPulling="2026-03-18 14:34:13.802978833 +0000 UTC m=+2037.932107290" lastFinishedPulling="2026-03-18 14:34:19.226170091 +0000 UTC m=+2043.355298558" observedRunningTime="2026-03-18 14:34:20.982103281 +0000 UTC m=+2045.111231738" watchObservedRunningTime="2026-03-18 14:34:21.004890055 +0000 UTC m=+2045.134018522" Mar 18 14:34:21 crc kubenswrapper[4857]: I0318 14:34:21.015565 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" podStartSLOduration=4.303738468 podStartE2EDuration="10.015540013s" podCreationTimestamp="2026-03-18 14:34:11 +0000 UTC" 
firstStartedPulling="2026-03-18 14:34:13.485553841 +0000 UTC m=+2037.614682298" lastFinishedPulling="2026-03-18 14:34:19.197355386 +0000 UTC m=+2043.326483843" observedRunningTime="2026-03-18 14:34:21.012931887 +0000 UTC m=+2045.142060354" watchObservedRunningTime="2026-03-18 14:34:21.015540013 +0000 UTC m=+2045.144668470" Mar 18 14:34:22 crc kubenswrapper[4857]: I0318 14:34:22.411391 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw"] Mar 18 14:34:23 crc kubenswrapper[4857]: I0318 14:34:23.227964 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" event={"ID":"055be889-b95b-4aab-8510-682080ae57fc","Type":"ContainerStarted","Data":"dbb09e5cfff0ff90565d64c1f17fcde4cb85f05c1083650f114515e05b6300a8"} Mar 18 14:34:26 crc kubenswrapper[4857]: I0318 14:34:26.279168 4857 generic.go:334] "Generic (PLEG): container finished" podID="cf037310-f1c6-404e-b55a-f23c33b43373" containerID="8b3f37d75a5a61621f86f423f7e7f02191dfb7cd05cb4bc9254dbbbcf29a0c37" exitCode=0 Mar 18 14:34:26 crc kubenswrapper[4857]: I0318 14:34:26.279378 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf037310-f1c6-404e-b55a-f23c33b43373","Type":"ContainerDied","Data":"8b3f37d75a5a61621f86f423f7e7f02191dfb7cd05cb4bc9254dbbbcf29a0c37"} Mar 18 14:34:27 crc kubenswrapper[4857]: I0318 14:34:27.302212 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf037310-f1c6-404e-b55a-f23c33b43373","Type":"ContainerStarted","Data":"0ca37e50da4d9111d0e94e3416a27331fa240e022c9b44f8cdead6acf782c615"} Mar 18 14:34:27 crc kubenswrapper[4857]: I0318 14:34:27.302877 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 18 14:34:27 crc kubenswrapper[4857]: I0318 14:34:27.311392 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="e447043a-8fa6-4b8c-b103-57fd3b484088" containerID="f4fed0805f406df9aaf470317c8bd7c1ba9217c108d1e15e604c368eabc01689" exitCode=0 Mar 18 14:34:27 crc kubenswrapper[4857]: I0318 14:34:27.311446 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"e447043a-8fa6-4b8c-b103-57fd3b484088","Type":"ContainerDied","Data":"f4fed0805f406df9aaf470317c8bd7c1ba9217c108d1e15e604c368eabc01689"} Mar 18 14:34:27 crc kubenswrapper[4857]: I0318 14:34:27.349230 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.349193495 podStartE2EDuration="54.349193495s" podCreationTimestamp="2026-03-18 14:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:34:27.337925651 +0000 UTC m=+2051.467054208" watchObservedRunningTime="2026-03-18 14:34:27.349193495 +0000 UTC m=+2051.478321962" Mar 18 14:34:28 crc kubenswrapper[4857]: I0318 14:34:28.330660 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"e447043a-8fa6-4b8c-b103-57fd3b484088","Type":"ContainerStarted","Data":"746d3725d04e3c80d8154dc03c2541c3b029b1e448db33a943a23aabb81af3d6"} Mar 18 14:34:29 crc kubenswrapper[4857]: I0318 14:34:29.664225 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Mar 18 14:34:29 crc kubenswrapper[4857]: I0318 14:34:29.683299 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=55.683278379 podStartE2EDuration="55.683278379s" podCreationTimestamp="2026-03-18 14:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:34:29.654665479 +0000 UTC m=+2053.783793926" 
watchObservedRunningTime="2026-03-18 14:34:29.683278379 +0000 UTC m=+2053.812406836" Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.354204 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5cc4978d9b-95h9v" Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.397189 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-65d99fb45d-wdcmd" Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.426606 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.426919 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5dc49b6cff-qkjws" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine" containerID="cri-o://f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" gracePeriod=60 Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.537488 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"] Mar 18 14:34:32 crc kubenswrapper[4857]: I0318 14:34:32.537741 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" containerID="cri-o://e7b280aa31ad8500d2907451d0a9345096499dc0024e4a8ff967cecc55c8fd9c" gracePeriod=60 Mar 18 14:34:34 crc kubenswrapper[4857]: I0318 14:34:34.003863 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5fbb7cf74b-jgtw7" Mar 18 14:34:34 crc kubenswrapper[4857]: I0318 14:34:34.103937 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"] Mar 18 14:34:34 crc kubenswrapper[4857]: I0318 14:34:34.104393 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" containerID="cri-o://6d3879d227cb7eee1fa08208d8c66b82651f3cda020e3e56abf8ef13c9b4c261" gracePeriod=60
Mar 18 14:34:37 crc kubenswrapper[4857]: I0318 14:34:37.307233 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": dial tcp 10.217.0.235:8004: connect: connection refused"
Mar 18 14:34:37 crc kubenswrapper[4857]: I0318 14:34:37.352413 4857 generic.go:334] "Generic (PLEG): container finished" podID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerID="e7b280aa31ad8500d2907451d0a9345096499dc0024e4a8ff967cecc55c8fd9c" exitCode=0
Mar 18 14:34:37 crc kubenswrapper[4857]: I0318 14:34:37.352471 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff5fc6d6f-phz9q" event={"ID":"f8bffa05-4039-4fa4-b173-8fc1cfa492c9","Type":"ContainerDied","Data":"e7b280aa31ad8500d2907451d0a9345096499dc0024e4a8ff967cecc55c8fd9c"}
Mar 18 14:34:37 crc kubenswrapper[4857]: E0318 14:34:37.366876 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:37 crc kubenswrapper[4857]: E0318 14:34:37.376159 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:37 crc kubenswrapper[4857]: E0318 14:34:37.394025 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:37 crc kubenswrapper[4857]: E0318 14:34:37.394113 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5dc49b6cff-qkjws" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine"
Mar 18 14:34:37 crc kubenswrapper[4857]: I0318 14:34:37.846355 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": read tcp 10.217.0.2:52364->10.217.0.234:8000: read: connection reset by peer"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.001981 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-sb47k"]
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.018881 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-sb47k"]
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.095395 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-8pl4z"]
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.097712 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.106396 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.138620 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8pl4z"]
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.580898 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsfs\" (UniqueName: \"kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.581000 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.588040 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.588849 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.684766 4857 generic.go:334] "Generic (PLEG): container finished" podID="b715c731-2351-42c5-9f06-d99258f15771" containerID="6d3879d227cb7eee1fa08208d8c66b82651f3cda020e3e56abf8ef13c9b4c261" exitCode=0
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.684818 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6756b6568c-jbstd" event={"ID":"b715c731-2351-42c5-9f06-d99258f15771","Type":"ContainerDied","Data":"6d3879d227cb7eee1fa08208d8c66b82651f3cda020e3e56abf8ef13c9b4c261"}
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.693710 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.693829 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsfs\" (UniqueName: \"kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.693915 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.694033 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.704111 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.705126 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.723870 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:38 crc kubenswrapper[4857]: I0318 14:34:38.734725 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfsfs\" (UniqueName: \"kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs\") pod \"aodh-db-sync-8pl4z\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:39 crc kubenswrapper[4857]: I0318 14:34:39.024601 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8pl4z"
Mar 18 14:34:39 crc kubenswrapper[4857]: I0318 14:34:39.185952 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf63b2b-fa66-4a0d-8a89-d7a07693b00c" path="/var/lib/kubelet/pods/ecf63b2b-fa66-4a0d-8a89-d7a07693b00c/volumes"
Mar 18 14:34:42 crc kubenswrapper[4857]: I0318 14:34:42.232233 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": dial tcp 10.217.0.234:8000: connect: connection refused"
Mar 18 14:34:42 crc kubenswrapper[4857]: I0318 14:34:42.307380 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": dial tcp 10.217.0.235:8004: connect: connection refused"
Mar 18 14:34:45 crc kubenswrapper[4857]: I0318 14:34:45.243797 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cf037310-f1c6-404e-b55a-f23c33b43373" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.25:5671: connect: connection refused"
Mar 18 14:34:45 crc kubenswrapper[4857]: I0318 14:34:45.250459 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="e447043a-8fa6-4b8c-b103-57fd3b484088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.26:5671: connect: connection refused"
Mar 18 14:34:46 crc kubenswrapper[4857]: E0318 14:34:46.602729 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Mar 18 14:34:46 crc kubenswrapper[4857]: E0318 14:34:46.603703 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 18 14:34:46 crc kubenswrapper[4857]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Mar 18 14:34:46 crc kubenswrapper[4857]: - hosts: all
Mar 18 14:34:46 crc kubenswrapper[4857]: strategy: linear
Mar 18 14:34:46 crc kubenswrapper[4857]: tasks:
Mar 18 14:34:46 crc kubenswrapper[4857]: - name: Enable podified-repos
Mar 18 14:34:46 crc kubenswrapper[4857]: become: true
Mar 18 14:34:46 crc kubenswrapper[4857]: ansible.builtin.shell: |
Mar 18 14:34:46 crc kubenswrapper[4857]: set -euxo pipefail
Mar 18 14:34:46 crc kubenswrapper[4857]: pushd /var/tmp
Mar 18 14:34:46 crc kubenswrapper[4857]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Mar 18 14:34:46 crc kubenswrapper[4857]: pushd repo-setup-main
Mar 18 14:34:46 crc kubenswrapper[4857]: python3 -m venv ./venv
Mar 18 14:34:46 crc kubenswrapper[4857]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Mar 18 14:34:46 crc kubenswrapper[4857]: ./venv/bin/repo-setup current-podified -b antelope
Mar 18 14:34:46 crc kubenswrapper[4857]: popd
Mar 18 14:34:46 crc kubenswrapper[4857]: rm -rf repo-setup-main
Mar 18 14:34:46 crc kubenswrapper[4857]: 
Mar 18 14:34:46 crc kubenswrapper[4857]: 
Mar 18 14:34:46 crc kubenswrapper[4857]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Mar 18 14:34:46 crc kubenswrapper[4857]: edpm_override_hosts: openstack-edpm-ipam
Mar 18 14:34:46 crc kubenswrapper[4857]: edpm_service_type: repo-setup
Mar 18 14:34:46 crc kubenswrapper[4857]: 
Mar 18 14:34:46 crc kubenswrapper[4857]: 
Mar 18 14:34:46 crc kubenswrapper[4857]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cj4v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw_openstack(055be889-b95b-4aab-8510-682080ae57fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Mar 18 14:34:46 crc kubenswrapper[4857]: > logger="UnhandledError"
Mar 18 14:34:46 crc kubenswrapper[4857]: E0318 14:34:46.606693 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" podUID="055be889-b95b-4aab-8510-682080ae57fc"
Mar 18 14:34:46 crc kubenswrapper[4857]: E0318 14:34:46.852951 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" podUID="055be889-b95b-4aab-8510-682080ae57fc"
Mar 18 14:34:47 crc kubenswrapper[4857]: I0318 14:34:47.254221 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6756b6568c-jbstd" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.234:8000/healthcheck\": dial tcp 10.217.0.234:8000: connect: connection refused"
Mar 18 14:34:47 crc kubenswrapper[4857]: I0318 14:34:47.284681 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6756b6568c-jbstd"
Mar 18 14:34:47 crc kubenswrapper[4857]: E0318 14:34:47.356657 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:47 crc kubenswrapper[4857]: E0318 14:34:47.358545 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:47 crc kubenswrapper[4857]: E0318 14:34:47.370135 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 18 14:34:47 crc kubenswrapper[4857]: E0318 14:34:47.370222 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5dc49b6cff-qkjws" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine"
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.075288 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6ff5fc6d6f-phz9q" event={"ID":"f8bffa05-4039-4fa4-b173-8fc1cfa492c9","Type":"ContainerDied","Data":"edbd22bf2e47b5491c9c259fa1c102b1dc9f3b2ae9c4de7e45b41205587a9437"}
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.075361 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edbd22bf2e47b5491c9c259fa1c102b1dc9f3b2ae9c4de7e45b41205587a9437"
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.077742 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6756b6568c-jbstd" event={"ID":"b715c731-2351-42c5-9f06-d99258f15771","Type":"ContainerDied","Data":"46978f420aa1faf39b758c9dbdfd0b58c7039c58983405184f784efed02fb5ae"}
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.077894 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46978f420aa1faf39b758c9dbdfd0b58c7039c58983405184f784efed02fb5ae"
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.130933 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6ff5fc6d6f-phz9q"
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.137546 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6756b6568c-jbstd"
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.206980 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8pl4z"]
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.215119 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227019 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227269 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gw2t\" (UniqueName: \"kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227334 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227402 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227457 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227501 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dptgf\" (UniqueName: \"kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227589 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227704 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227791 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227874 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227906 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom\") pod \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\" (UID: \"f8bffa05-4039-4fa4-b173-8fc1cfa492c9\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.227968 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs\") pod \"b715c731-2351-42c5-9f06-d99258f15771\" (UID: \"b715c731-2351-42c5-9f06-d99258f15771\") "
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.242699 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf" (OuterVolumeSpecName: "kube-api-access-dptgf") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "kube-api-access-dptgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.243941 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t" (OuterVolumeSpecName: "kube-api-access-5gw2t") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "kube-api-access-5gw2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.246193 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.248395 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.331242 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.331835 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334905 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334937 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gw2t\" (UniqueName: \"kubernetes.io/projected/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-kube-api-access-5gw2t\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334950 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dptgf\" (UniqueName: \"kubernetes.io/projected/b715c731-2351-42c5-9f06-d99258f15771-kube-api-access-dptgf\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334960 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334969 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.334977 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.380060 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.380723 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.381552 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.397067 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data" (OuterVolumeSpecName: "config-data") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.400407 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f8bffa05-4039-4fa4-b173-8fc1cfa492c9" (UID: "f8bffa05-4039-4fa4-b173-8fc1cfa492c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.412576 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data" (OuterVolumeSpecName: "config-data") pod "b715c731-2351-42c5-9f06-d99258f15771" (UID: "b715c731-2351-42c5-9f06-d99258f15771"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.437605 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.437883 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.437996 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.438084 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8bffa05-4039-4fa4-b173-8fc1cfa492c9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.438171 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:48 crc kubenswrapper[4857]: I0318 14:34:48.438290 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b715c731-2351-42c5-9f06-d99258f15771-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.097048 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6756b6568c-jbstd"
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.097395 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8pl4z" event={"ID":"8981fabd-a063-4094-8843-f2f8190b1a50","Type":"ContainerStarted","Data":"0c15e24e01a6d080db3866ea6c16644bd6a40df818eabaccf479a806ce6ab67c"}
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.097476 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6ff5fc6d6f-phz9q"
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.192280 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"]
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.192320 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6ff5fc6d6f-phz9q"]
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.200808 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"]
Mar 18 14:34:49 crc kubenswrapper[4857]: I0318 14:34:49.212443 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6756b6568c-jbstd"]
Mar 18 14:34:51 crc kubenswrapper[4857]: I0318 14:34:51.177205 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b715c731-2351-42c5-9f06-d99258f15771" path="/var/lib/kubelet/pods/b715c731-2351-42c5-9f06-d99258f15771/volumes"
Mar 18 14:34:51 crc kubenswrapper[4857]: I0318 14:34:51.178460 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" path="/var/lib/kubelet/pods/f8bffa05-4039-4fa4-b173-8fc1cfa492c9/volumes"
Mar 18 14:34:54 crc kubenswrapper[4857]: I0318 14:34:54.195089 4857 generic.go:334] "Generic (PLEG): container finished" podID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" exitCode=0
Mar 18 14:34:54 crc kubenswrapper[4857]: I0318 14:34:54.195171 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dc49b6cff-qkjws" event={"ID":"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e","Type":"ContainerDied","Data":"f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670"}
Mar 18 14:34:55 crc kubenswrapper[4857]: I0318 14:34:55.243013 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 14:34:55 crc kubenswrapper[4857]: I0318 14:34:55.245172 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="e447043a-8fa6-4b8c-b103-57fd3b484088" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.26:5671: connect: connection refused"
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.249537 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dc49b6cff-qkjws" event={"ID":"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e","Type":"ContainerDied","Data":"5e446f36887e2243a7f92967ac600a0b4350cb3b22879fcfd4137cc4723b27ec"}
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.249826 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e446f36887e2243a7f92967ac600a0b4350cb3b22879fcfd4137cc4723b27ec"
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.344380 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5dc49b6cff-qkjws"
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.437592 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-274fp\" (UniqueName: \"kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp\") pod \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") "
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.439219 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom\") pod \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") "
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.439549 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle\") pod \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") "
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.439830 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data\") pod \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\" (UID: \"52b3c9e1-e408-41b0-87e2-56cccd8d4d5e\") "
Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.447938 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" (UID: "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e"). InnerVolumeSpecName "config-data-custom".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.448618 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp" (OuterVolumeSpecName: "kube-api-access-274fp") pod "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" (UID: "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e"). InnerVolumeSpecName "kube-api-access-274fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.486341 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" (UID: "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.532466 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data" (OuterVolumeSpecName: "config-data") pod "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" (UID: "52b3c9e1-e408-41b0-87e2-56cccd8d4d5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.546834 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-274fp\" (UniqueName: \"kubernetes.io/projected/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-kube-api-access-274fp\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.546874 4857 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.546884 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:56 crc kubenswrapper[4857]: I0318 14:34:56.546893 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.269176 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5dc49b6cff-qkjws" Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.271020 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8pl4z" event={"ID":"8981fabd-a063-4094-8843-f2f8190b1a50","Type":"ContainerStarted","Data":"32c278b4a84ea9646703fec60b40298d92c03616423ee8a6e884fcd6ce7b93ac"} Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.301299 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-8pl4z" podStartSLOduration=11.168118388 podStartE2EDuration="19.301260226s" podCreationTimestamp="2026-03-18 14:34:38 +0000 UTC" firstStartedPulling="2026-03-18 14:34:48.214693815 +0000 UTC m=+2072.343822272" lastFinishedPulling="2026-03-18 14:34:56.347835653 +0000 UTC m=+2080.476964110" observedRunningTime="2026-03-18 14:34:57.295181833 +0000 UTC m=+2081.424310290" watchObservedRunningTime="2026-03-18 14:34:57.301260226 +0000 UTC m=+2081.430388683" Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.310348 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6ff5fc6d6f-phz9q" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.235:8004/healthcheck\": dial tcp 10.217.0.235:8004: i/o timeout" Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.331388 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:34:57 crc kubenswrapper[4857]: I0318 14:34:57.350422 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5dc49b6cff-qkjws"] Mar 18 14:34:59 crc kubenswrapper[4857]: I0318 14:34:59.182994 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" path="/var/lib/kubelet/pods/52b3c9e1-e408-41b0-87e2-56cccd8d4d5e/volumes" Mar 18 14:35:04 crc kubenswrapper[4857]: I0318 14:35:04.403959 4857 
scope.go:117] "RemoveContainer" containerID="78d456a3a21e5c8ca9ae2080918b669211a92d0f904ad37492029ba929206a8e" Mar 18 14:35:04 crc kubenswrapper[4857]: I0318 14:35:04.621461 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" event={"ID":"055be889-b95b-4aab-8510-682080ae57fc","Type":"ContainerStarted","Data":"2085b3e063e737eb66323f4fb1213390d85cd16d4d0aeacc9e1ebd8f28af2af6"} Mar 18 14:35:04 crc kubenswrapper[4857]: I0318 14:35:04.627110 4857 scope.go:117] "RemoveContainer" containerID="6d3879d227cb7eee1fa08208d8c66b82651f3cda020e3e56abf8ef13c9b4c261" Mar 18 14:35:04 crc kubenswrapper[4857]: I0318 14:35:04.654073 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" podStartSLOduration=3.791358848 podStartE2EDuration="44.654046787s" podCreationTimestamp="2026-03-18 14:34:20 +0000 UTC" firstStartedPulling="2026-03-18 14:34:22.418571106 +0000 UTC m=+2046.547699563" lastFinishedPulling="2026-03-18 14:35:03.281259035 +0000 UTC m=+2087.410387502" observedRunningTime="2026-03-18 14:35:04.640402404 +0000 UTC m=+2088.769530871" watchObservedRunningTime="2026-03-18 14:35:04.654046787 +0000 UTC m=+2088.783175234" Mar 18 14:35:04 crc kubenswrapper[4857]: I0318 14:35:04.877966 4857 scope.go:117] "RemoveContainer" containerID="e7b280aa31ad8500d2907451d0a9345096499dc0024e4a8ff967cecc55c8fd9c" Mar 18 14:35:05 crc kubenswrapper[4857]: I0318 14:35:05.079894 4857 scope.go:117] "RemoveContainer" containerID="f74d2c6f47f24074afea6fe89c42029a055a6cd49c8cf145d1ac86556ac07670" Mar 18 14:35:05 crc kubenswrapper[4857]: I0318 14:35:05.200154 4857 scope.go:117] "RemoveContainer" containerID="c42176e1ec617d90ac802f19bc4cfc8e2a1f36d68d633c24c72832d7ce3c4c1a" Mar 18 14:35:05 crc kubenswrapper[4857]: I0318 14:35:05.246945 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Mar 18 
14:35:05 crc kubenswrapper[4857]: I0318 14:35:05.326431 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Mar 18 14:35:11 crc kubenswrapper[4857]: I0318 14:35:11.731507 4857 generic.go:334] "Generic (PLEG): container finished" podID="8981fabd-a063-4094-8843-f2f8190b1a50" containerID="32c278b4a84ea9646703fec60b40298d92c03616423ee8a6e884fcd6ce7b93ac" exitCode=0 Mar 18 14:35:11 crc kubenswrapper[4857]: I0318 14:35:11.731643 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8pl4z" event={"ID":"8981fabd-a063-4094-8843-f2f8190b1a50","Type":"ContainerDied","Data":"32c278b4a84ea9646703fec60b40298d92c03616423ee8a6e884fcd6ce7b93ac"} Mar 18 14:35:11 crc kubenswrapper[4857]: I0318 14:35:11.843560 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="rabbitmq" containerID="cri-o://2ed308be836bd7991f890aa94f9af0da26f437f5b82d59f01acd49062cb12c2f" gracePeriod=604794 Mar 18 14:35:12 crc kubenswrapper[4857]: I0318 14:35:12.065685 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5vwhx"] Mar 18 14:35:12 crc kubenswrapper[4857]: I0318 14:35:12.079526 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5vwhx"] Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.046460 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8r2f9"] Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.064890 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-8r2f9"] Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.186697 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5cc680-f973-4abe-a161-a19ac4036406" path="/var/lib/kubelet/pods/3a5cc680-f973-4abe-a161-a19ac4036406/volumes" Mar 18 14:35:13 crc 
kubenswrapper[4857]: I0318 14:35:13.193147 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f18e55-a740-4fec-9739-82062db6f9d8" path="/var/lib/kubelet/pods/78f18e55-a740-4fec-9739-82062db6f9d8/volumes" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.367479 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8pl4z" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.435163 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data\") pod \"8981fabd-a063-4094-8843-f2f8190b1a50\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.435428 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfsfs\" (UniqueName: \"kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs\") pod \"8981fabd-a063-4094-8843-f2f8190b1a50\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.435542 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle\") pod \"8981fabd-a063-4094-8843-f2f8190b1a50\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.435574 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts\") pod \"8981fabd-a063-4094-8843-f2f8190b1a50\" (UID: \"8981fabd-a063-4094-8843-f2f8190b1a50\") " Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.456911 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts" (OuterVolumeSpecName: "scripts") pod "8981fabd-a063-4094-8843-f2f8190b1a50" (UID: "8981fabd-a063-4094-8843-f2f8190b1a50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.457022 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs" (OuterVolumeSpecName: "kube-api-access-qfsfs") pod "8981fabd-a063-4094-8843-f2f8190b1a50" (UID: "8981fabd-a063-4094-8843-f2f8190b1a50"). InnerVolumeSpecName "kube-api-access-qfsfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.475108 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data" (OuterVolumeSpecName: "config-data") pod "8981fabd-a063-4094-8843-f2f8190b1a50" (UID: "8981fabd-a063-4094-8843-f2f8190b1a50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.493496 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8981fabd-a063-4094-8843-f2f8190b1a50" (UID: "8981fabd-a063-4094-8843-f2f8190b1a50"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.538861 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.538895 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.538908 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfsfs\" (UniqueName: \"kubernetes.io/projected/8981fabd-a063-4094-8843-f2f8190b1a50-kube-api-access-qfsfs\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.538920 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8981fabd-a063-4094-8843-f2f8190b1a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.763456 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8pl4z" event={"ID":"8981fabd-a063-4094-8843-f2f8190b1a50","Type":"ContainerDied","Data":"0c15e24e01a6d080db3866ea6c16644bd6a40df818eabaccf479a806ce6ab67c"} Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.763742 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c15e24e01a6d080db3866ea6c16644bd6a40df818eabaccf479a806ce6ab67c" Mar 18 14:35:13 crc kubenswrapper[4857]: I0318 14:35:13.763552 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8pl4z" Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.065381 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-s5fvr"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.087837 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2499-account-create-update-j6xhq"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.109099 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-6779-account-create-update-4dxfv"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.120012 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-s5fvr"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.129438 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-6779-account-create-update-4dxfv"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.139254 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2499-account-create-update-j6xhq"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.168306 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9zfmn"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.168385 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9zfmn"] Mar 18 14:35:14 crc kubenswrapper[4857]: I0318 14:35:14.168404 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f0bc-account-create-update-4lkqr"] Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:14.195301 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0a9b-account-create-update-6lftr"] Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:14.195366 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f0bc-account-create-update-4lkqr"] Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 
14:35:14.195385 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0a9b-account-create-update-6lftr"] Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.879270 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530" path="/var/lib/kubelet/pods/2ab70772-1a0c-40b3-a4f4-1ff2fa7b0530/volumes" Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.885359 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781bd548-5b56-4f74-b1a2-2228b7890b3a" path="/var/lib/kubelet/pods/781bd548-5b56-4f74-b1a2-2228b7890b3a/volumes" Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.887770 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943237f4-af1c-4d28-a5e1-5dc93d0d2c71" path="/var/lib/kubelet/pods/943237f4-af1c-4d28-a5e1-5dc93d0d2c71/volumes" Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.898649 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bded09f-2eca-4e52-b648-a21c151b61b6" path="/var/lib/kubelet/pods/9bded09f-2eca-4e52-b648-a21c151b61b6/volumes" Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.900185 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a515a015-c680-4c7b-bdd6-ce46602b7e30" path="/var/lib/kubelet/pods/a515a015-c680-4c7b-bdd6-ce46602b7e30/volumes" Mar 18 14:35:15 crc kubenswrapper[4857]: I0318 14:35:15.900971 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9" path="/var/lib/kubelet/pods/b3dd0ed1-224e-4d0f-9e41-ef7b140c78f9/volumes" Mar 18 14:35:18 crc kubenswrapper[4857]: I0318 14:35:18.404901 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:18 crc kubenswrapper[4857]: I0318 14:35:18.407323 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" 
containerName="aodh-listener" containerID="cri-o://166c5ec08e7a1e68f138018b11f4e563da7009781cdd1bce6e2a39ba75d2d83e" gracePeriod=30 Mar 18 14:35:18 crc kubenswrapper[4857]: I0318 14:35:18.407387 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-notifier" containerID="cri-o://dd3296fad301f43c8c768b86035d0e6162d083d8f944cb005583834f0257d7c7" gracePeriod=30 Mar 18 14:35:18 crc kubenswrapper[4857]: I0318 14:35:18.407353 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-evaluator" containerID="cri-o://e42e23a9e6663fd00168c3b8b1bd4c9c5deaed813117796fe02823e255b40b61" gracePeriod=30 Mar 18 14:35:18 crc kubenswrapper[4857]: I0318 14:35:18.407261 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-api" containerID="cri-o://7cc28e84be6f9b23535bf1612ea4d9789f303bb0bf04ee415092192c0b3c2dd2" gracePeriod=30 Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.486009 4857 generic.go:334] "Generic (PLEG): container finished" podID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerID="e42e23a9e6663fd00168c3b8b1bd4c9c5deaed813117796fe02823e255b40b61" exitCode=0 Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.486436 4857 generic.go:334] "Generic (PLEG): container finished" podID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerID="7cc28e84be6f9b23535bf1612ea4d9789f303bb0bf04ee415092192c0b3c2dd2" exitCode=0 Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.486104 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerDied","Data":"e42e23a9e6663fd00168c3b8b1bd4c9c5deaed813117796fe02823e255b40b61"} Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 
14:35:19.486556 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerDied","Data":"7cc28e84be6f9b23535bf1612ea4d9789f303bb0bf04ee415092192c0b3c2dd2"} Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.490185 4857 generic.go:334] "Generic (PLEG): container finished" podID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerID="2ed308be836bd7991f890aa94f9af0da26f437f5b82d59f01acd49062cb12c2f" exitCode=0 Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.490236 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerDied","Data":"2ed308be836bd7991f890aa94f9af0da26f437f5b82d59f01acd49062cb12c2f"} Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.490271 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"83d0525c-c26a-4aae-ac6c-40c625cf5d37","Type":"ContainerDied","Data":"05fe8b517771536f54ac7f77640440a6ac64214356b1fe60f95dd49ab41c31c3"} Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.490291 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fe8b517771536f54ac7f77640440a6ac64214356b1fe60f95dd49ab41c31c3" Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.595533 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.716741 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.716814 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.716950 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.716985 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717073 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717112 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717227 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717282 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx5z6\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717334 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.717390 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.721841 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\" (UID: \"83d0525c-c26a-4aae-ac6c-40c625cf5d37\") " Mar 
18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.724084 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.728046 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.729349 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.739067 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.740350 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.767919 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info" (OuterVolumeSpecName: "pod-info") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.775826 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6" (OuterVolumeSpecName: "kube-api-access-vx5z6") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "kube-api-access-vx5z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.803528 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2" (OuterVolumeSpecName: "persistence") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "pvc-dae6287d-2084-4658-86a8-903a6ce996c2". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.804655 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf" (OuterVolumeSpecName: "server-conf") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.819435 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data" (OuterVolumeSpecName: "config-data") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830269 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-config-data\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830312 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx5z6\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-kube-api-access-vx5z6\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830324 4857 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83d0525c-c26a-4aae-ac6c-40c625cf5d37-pod-info\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830333 4857 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83d0525c-c26a-4aae-ac6c-40c625cf5d37-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830397 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") on node \"crc\" "
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830420 4857 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-server-conf\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830434 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830445 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830456 4857 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83d0525c-c26a-4aae-ac6c-40c625cf5d37-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.830466 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.883977 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.884176 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dae6287d-2084-4658-86a8-903a6ce996c2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2") on node "crc"
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.932789 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:19 crc kubenswrapper[4857]: I0318 14:35:19.934645 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "83d0525c-c26a-4aae-ac6c-40c625cf5d37" (UID: "83d0525c-c26a-4aae-ac6c-40c625cf5d37"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.035193 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83d0525c-c26a-4aae-ac6c-40c625cf5d37-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.507057 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.553565 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.578268 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.603136 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.603994 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604038 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api"
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.604072 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="rabbitmq"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604082 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="rabbitmq"
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.604102 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604110 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi"
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.604152 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604165 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine"
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.604186 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8981fabd-a063-4094-8843-f2f8190b1a50" containerName="aodh-db-sync"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604197 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8981fabd-a063-4094-8843-f2f8190b1a50" containerName="aodh-db-sync"
Mar 18 14:35:20 crc kubenswrapper[4857]: E0318 14:35:20.604222 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="setup-container"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604230 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="setup-container"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604593 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b715c731-2351-42c5-9f06-d99258f15771" containerName="heat-cfnapi"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604636 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" containerName="rabbitmq"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604652 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8bffa05-4039-4fa4-b173-8fc1cfa492c9" containerName="heat-api"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604679 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8981fabd-a063-4094-8843-f2f8190b1a50" containerName="aodh-db-sync"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.604692 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b3c9e1-e408-41b0-87e2-56cccd8d4d5e" containerName="heat-engine"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.606627 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.661818 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.761881 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-config-data\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762227 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762255 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762287 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762311 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762357 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762432 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762455 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762509 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8mw\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-kube-api-access-9g8mw\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762635 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.762662 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865379 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865466 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865636 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-config-data\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865675 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865703 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865768 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865805 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865842 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865904 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.865940 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.866034 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g8mw\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-kube-api-access-9g8mw\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.866196 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.867171 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.867396 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.868389 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-config-data\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.869042 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.873131 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.873187 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7cc840d4d19da8ffddf11bfbc2594b044fc276a15e3ae8ac00eb9baebd04c7ec/globalmount\"" pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.873423 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.874104 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.874974 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.886664 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:20 crc kubenswrapper[4857]: I0318 14:35:20.888496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g8mw\" (UniqueName: \"kubernetes.io/projected/bffd47eb-3c88-41b8-bda7-f885b44d3ee8-kube-api-access-9g8mw\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.042453 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dae6287d-2084-4658-86a8-903a6ce996c2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dae6287d-2084-4658-86a8-903a6ce996c2\") pod \"rabbitmq-server-1\" (UID: \"bffd47eb-3c88-41b8-bda7-f885b44d3ee8\") " pod="openstack/rabbitmq-server-1"
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.048622 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b97d-account-create-update-8zhjd"]
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.064137 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b97d-account-create-update-8zhjd"]
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.180063 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508c9be6-0f5e-47ba-b48b-0d28dbf92af3" path="/var/lib/kubelet/pods/508c9be6-0f5e-47ba-b48b-0d28dbf92af3/volumes"
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.185355 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d0525c-c26a-4aae-ac6c-40c625cf5d37" path="/var/lib/kubelet/pods/83d0525c-c26a-4aae-ac6c-40c625cf5d37/volumes"
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.229633 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Mar 18 14:35:21 crc kubenswrapper[4857]: I0318 14:35:21.793496 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Mar 18 14:35:21 crc kubenswrapper[4857]: W0318 14:35:21.801039 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbffd47eb_3c88_41b8_bda7_f885b44d3ee8.slice/crio-aafd456c7ff4f4f143a5e2ed78a6b9706f1f93b5e510f912e3dc8eafe5cd1f51 WatchSource:0}: Error finding container aafd456c7ff4f4f143a5e2ed78a6b9706f1f93b5e510f912e3dc8eafe5cd1f51: Status 404 returned error can't find the container with id aafd456c7ff4f4f143a5e2ed78a6b9706f1f93b5e510f912e3dc8eafe5cd1f51
Mar 18 14:35:22 crc kubenswrapper[4857]: I0318 14:35:22.046970 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc"]
Mar 18 14:35:22 crc kubenswrapper[4857]: I0318 14:35:22.059273 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-8d7cc"]
Mar 18 14:35:22 crc kubenswrapper[4857]: I0318 14:35:22.536887 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"bffd47eb-3c88-41b8-bda7-f885b44d3ee8","Type":"ContainerStarted","Data":"aafd456c7ff4f4f143a5e2ed78a6b9706f1f93b5e510f912e3dc8eafe5cd1f51"}
Mar 18 14:35:23 crc kubenswrapper[4857]: I0318 14:35:23.189426 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea36fa5-0ed9-4931-b618-f1731d9bfe49" path="/var/lib/kubelet/pods/bea36fa5-0ed9-4931-b618-f1731d9bfe49/volumes"
Mar 18 14:35:23 crc kubenswrapper[4857]: I0318 14:35:23.549595 4857 generic.go:334] "Generic (PLEG): container finished" podID="055be889-b95b-4aab-8510-682080ae57fc" containerID="2085b3e063e737eb66323f4fb1213390d85cd16d4d0aeacc9e1ebd8f28af2af6" exitCode=0
Mar 18 14:35:23 crc kubenswrapper[4857]: I0318 14:35:23.549660 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" event={"ID":"055be889-b95b-4aab-8510-682080ae57fc","Type":"ContainerDied","Data":"2085b3e063e737eb66323f4fb1213390d85cd16d4d0aeacc9e1ebd8f28af2af6"}
Mar 18 14:35:23 crc kubenswrapper[4857]: I0318 14:35:23.553266 4857 generic.go:334] "Generic (PLEG): container finished" podID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerID="dd3296fad301f43c8c768b86035d0e6162d083d8f944cb005583834f0257d7c7" exitCode=0
Mar 18 14:35:23 crc kubenswrapper[4857]: I0318 14:35:23.553305 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerDied","Data":"dd3296fad301f43c8c768b86035d0e6162d083d8f944cb005583834f0257d7c7"}
Mar 18 14:35:24 crc kubenswrapper[4857]: I0318 14:35:24.571900 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"bffd47eb-3c88-41b8-bda7-f885b44d3ee8","Type":"ContainerStarted","Data":"2ff33b53dbc7b391c559dad629fae48511ef3e605dbe7d0b6e55f062c212d1f8"}
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.166701 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.264254 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory\") pod \"055be889-b95b-4aab-8510-682080ae57fc\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") "
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.264572 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam\") pod \"055be889-b95b-4aab-8510-682080ae57fc\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") "
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.264730 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj4v9\" (UniqueName: \"kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9\") pod \"055be889-b95b-4aab-8510-682080ae57fc\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") "
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.264853 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle\") pod \"055be889-b95b-4aab-8510-682080ae57fc\" (UID: \"055be889-b95b-4aab-8510-682080ae57fc\") "
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.274962 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "055be889-b95b-4aab-8510-682080ae57fc" (UID: "055be889-b95b-4aab-8510-682080ae57fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.275240 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9" (OuterVolumeSpecName: "kube-api-access-cj4v9") pod "055be889-b95b-4aab-8510-682080ae57fc" (UID: "055be889-b95b-4aab-8510-682080ae57fc"). InnerVolumeSpecName "kube-api-access-cj4v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.308925 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "055be889-b95b-4aab-8510-682080ae57fc" (UID: "055be889-b95b-4aab-8510-682080ae57fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.337297 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory" (OuterVolumeSpecName: "inventory") pod "055be889-b95b-4aab-8510-682080ae57fc" (UID: "055be889-b95b-4aab-8510-682080ae57fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.368123 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj4v9\" (UniqueName: \"kubernetes.io/projected/055be889-b95b-4aab-8510-682080ae57fc-kube-api-access-cj4v9\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.368161 4857 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.368175 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-inventory\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.368186 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/055be889-b95b-4aab-8510-682080ae57fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.643459 4857 generic.go:334] "Generic (PLEG): container finished" podID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerID="166c5ec08e7a1e68f138018b11f4e563da7009781cdd1bce6e2a39ba75d2d83e" exitCode=0
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.643576 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerDied","Data":"166c5ec08e7a1e68f138018b11f4e563da7009781cdd1bce6e2a39ba75d2d83e"}
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.668274 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.668903 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw" event={"ID":"055be889-b95b-4aab-8510-682080ae57fc","Type":"ContainerDied","Data":"dbb09e5cfff0ff90565d64c1f17fcde4cb85f05c1083650f114515e05b6300a8"}
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.668961 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb09e5cfff0ff90565d64c1f17fcde4cb85f05c1083650f114515e05b6300a8"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.798449 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8"]
Mar 18 14:35:25 crc kubenswrapper[4857]: E0318 14:35:25.799100 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055be889-b95b-4aab-8510-682080ae57fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.799115 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="055be889-b95b-4aab-8510-682080ae57fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.799410 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="055be889-b95b-4aab-8510-682080ae57fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.800546 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.810065 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.810356 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.810708 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.811538 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8"]
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.811707 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5"
Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.841903 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.895498 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.895998 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.896124 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.896355 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t424v\" (UniqueName: \"kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.896453 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.896651 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle\") pod \"343e2b57-18ae-4935-95c3-2cedf23db40d\" (UID: \"343e2b57-18ae-4935-95c3-2cedf23db40d\") " Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.897326 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.897580 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.897723 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2ml\" (UniqueName: \"kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.902522 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v" (OuterVolumeSpecName: "kube-api-access-t424v") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "kube-api-access-t424v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.903312 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts" (OuterVolumeSpecName: "scripts") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:25 crc kubenswrapper[4857]: I0318 14:35:25.977088 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.005863 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.005975 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.006020 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2ml\" (UniqueName: 
\"kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.006172 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t424v\" (UniqueName: \"kubernetes.io/projected/343e2b57-18ae-4935-95c3-2cedf23db40d-kube-api-access-t424v\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.006187 4857 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-scripts\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.006197 4857 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.010442 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.010973 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.011862 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.030372 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2ml\" (UniqueName: \"kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-t98d8\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.048456 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data" (OuterVolumeSpecName: "config-data") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.091951 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "343e2b57-18ae-4935-95c3-2cedf23db40d" (UID: "343e2b57-18ae-4935-95c3-2cedf23db40d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.109859 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.109913 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.109935 4857 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/343e2b57-18ae-4935-95c3-2cedf23db40d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.157294 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.683120 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"343e2b57-18ae-4935-95c3-2cedf23db40d","Type":"ContainerDied","Data":"73dae512f03fbc5f6ef83f64bf8768e16e3bee197cbde410d9fbdc54121fe3b9"} Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.683445 4857 scope.go:117] "RemoveContainer" containerID="166c5ec08e7a1e68f138018b11f4e563da7009781cdd1bce6e2a39ba75d2d83e" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.683202 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.727829 4857 scope.go:117] "RemoveContainer" containerID="dd3296fad301f43c8c768b86035d0e6162d083d8f944cb005583834f0257d7c7" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.757671 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.780288 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.781104 4857 scope.go:117] "RemoveContainer" containerID="e42e23a9e6663fd00168c3b8b1bd4c9c5deaed813117796fe02823e255b40b61" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.796504 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:26 crc kubenswrapper[4857]: E0318 14:35:26.796992 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-notifier" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797014 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-notifier" Mar 18 14:35:26 crc kubenswrapper[4857]: E0318 14:35:26.797040 4857 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-evaluator" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797047 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-evaluator" Mar 18 14:35:26 crc kubenswrapper[4857]: E0318 14:35:26.797087 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-listener" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797095 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-listener" Mar 18 14:35:26 crc kubenswrapper[4857]: E0318 14:35:26.797111 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-api" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797117 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-api" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797339 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-listener" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797360 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-api" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797377 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-evaluator" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.797393 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" containerName="aodh-notifier" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.799732 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.802575 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-fvfqd" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.803773 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.803903 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.804039 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.804799 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.809735 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.826104 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8"] Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.861520 4857 scope.go:117] "RemoveContainer" containerID="7cc28e84be6f9b23535bf1612ea4d9789f303bb0bf04ee415092192c0b3c2dd2" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940001 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-scripts\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940539 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-config-data\") 
pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940588 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-public-tls-certs\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940690 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940714 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfk7\" (UniqueName: \"kubernetes.io/projected/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-kube-api-access-kwfk7\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:26 crc kubenswrapper[4857]: I0318 14:35:26.940780 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.038468 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.038546 4857 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043141 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-scripts\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043281 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-config-data\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043311 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-public-tls-certs\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043355 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043373 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwfk7\" (UniqueName: \"kubernetes.io/projected/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-kube-api-access-kwfk7\") pod \"aodh-0\" (UID: 
\"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.043396 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-internal-tls-certs\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.049604 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-config-data\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.050207 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.050231 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-public-tls-certs\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.050673 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-scripts\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.051160 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-internal-tls-certs\") 
pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.074284 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwfk7\" (UniqueName: \"kubernetes.io/projected/c6880f18-f2cd-43fa-8ef7-8f0d89744e3c-kube-api-access-kwfk7\") pod \"aodh-0\" (UID: \"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c\") " pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.179850 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.197240 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343e2b57-18ae-4935-95c3-2cedf23db40d" path="/var/lib/kubelet/pods/343e2b57-18ae-4935-95c3-2cedf23db40d/volumes" Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.699764 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" event={"ID":"527d9c47-3f89-4cf8-a69e-a522189755e1","Type":"ContainerStarted","Data":"f86c9bd8a98aa81cb059ed8baec17821c9b5f471123f215e0f6fe46bf755ee4b"} Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.700218 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" event={"ID":"527d9c47-3f89-4cf8-a69e-a522189755e1","Type":"ContainerStarted","Data":"0d565de41d9fd5f71a7158ce1794dc66b27599bf25621802e8f63b86ccc700fb"} Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.731945 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" podStartSLOduration=2.2523935760000002 podStartE2EDuration="2.731916169s" podCreationTimestamp="2026-03-18 14:35:25 +0000 UTC" firstStartedPulling="2026-03-18 14:35:26.822330158 +0000 UTC m=+2110.951458615" lastFinishedPulling="2026-03-18 14:35:27.301852741 +0000 UTC 
m=+2111.430981208" observedRunningTime="2026-03-18 14:35:27.715176207 +0000 UTC m=+2111.844304654" watchObservedRunningTime="2026-03-18 14:35:27.731916169 +0000 UTC m=+2111.861044626" Mar 18 14:35:27 crc kubenswrapper[4857]: W0318 14:35:27.795025 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6880f18_f2cd_43fa_8ef7_8f0d89744e3c.slice/crio-7fb4c812060df622090dff30a9cb8fa9d109f003d0cb44c98419deeac35e16ba WatchSource:0}: Error finding container 7fb4c812060df622090dff30a9cb8fa9d109f003d0cb44c98419deeac35e16ba: Status 404 returned error can't find the container with id 7fb4c812060df622090dff30a9cb8fa9d109f003d0cb44c98419deeac35e16ba Mar 18 14:35:27 crc kubenswrapper[4857]: I0318 14:35:27.799209 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 18 14:35:28 crc kubenswrapper[4857]: I0318 14:35:28.719867 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c","Type":"ContainerStarted","Data":"e38b4371d9bbdd2704a4cfb01f3f0f93296832a5c1d580df88e1ba627da46d27"} Mar 18 14:35:28 crc kubenswrapper[4857]: I0318 14:35:28.720189 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c","Type":"ContainerStarted","Data":"7fb4c812060df622090dff30a9cb8fa9d109f003d0cb44c98419deeac35e16ba"} Mar 18 14:35:31 crc kubenswrapper[4857]: I0318 14:35:31.775245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c","Type":"ContainerStarted","Data":"578ccef7b9f815d9c97aa77688374d0386c9d4a627107f7735ccba8876411dbd"} Mar 18 14:35:31 crc kubenswrapper[4857]: I0318 14:35:31.778675 4857 generic.go:334] "Generic (PLEG): container finished" podID="527d9c47-3f89-4cf8-a69e-a522189755e1" containerID="f86c9bd8a98aa81cb059ed8baec17821c9b5f471123f215e0f6fe46bf755ee4b" 
exitCode=0 Mar 18 14:35:31 crc kubenswrapper[4857]: I0318 14:35:31.778878 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" event={"ID":"527d9c47-3f89-4cf8-a69e-a522189755e1","Type":"ContainerDied","Data":"f86c9bd8a98aa81cb059ed8baec17821c9b5f471123f215e0f6fe46bf755ee4b"} Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.397733 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.464915 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam\") pod \"527d9c47-3f89-4cf8-a69e-a522189755e1\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.465153 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p2ml\" (UniqueName: \"kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml\") pod \"527d9c47-3f89-4cf8-a69e-a522189755e1\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.465255 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory\") pod \"527d9c47-3f89-4cf8-a69e-a522189755e1\" (UID: \"527d9c47-3f89-4cf8-a69e-a522189755e1\") " Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.475558 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml" (OuterVolumeSpecName: "kube-api-access-2p2ml") pod "527d9c47-3f89-4cf8-a69e-a522189755e1" (UID: 
"527d9c47-3f89-4cf8-a69e-a522189755e1"). InnerVolumeSpecName "kube-api-access-2p2ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.511264 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "527d9c47-3f89-4cf8-a69e-a522189755e1" (UID: "527d9c47-3f89-4cf8-a69e-a522189755e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.511671 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory" (OuterVolumeSpecName: "inventory") pod "527d9c47-3f89-4cf8-a69e-a522189755e1" (UID: "527d9c47-3f89-4cf8-a69e-a522189755e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.568895 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p2ml\" (UniqueName: \"kubernetes.io/projected/527d9c47-3f89-4cf8-a69e-a522189755e1-kube-api-access-2p2ml\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.568946 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.568963 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/527d9c47-3f89-4cf8-a69e-a522189755e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.805764 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" event={"ID":"527d9c47-3f89-4cf8-a69e-a522189755e1","Type":"ContainerDied","Data":"0d565de41d9fd5f71a7158ce1794dc66b27599bf25621802e8f63b86ccc700fb"} Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.805803 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-t98d8" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.805821 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d565de41d9fd5f71a7158ce1794dc66b27599bf25621802e8f63b86ccc700fb" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.897946 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq"] Mar 18 14:35:33 crc kubenswrapper[4857]: E0318 14:35:33.898832 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d9c47-3f89-4cf8-a69e-a522189755e1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.898856 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d9c47-3f89-4cf8-a69e-a522189755e1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.899251 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="527d9c47-3f89-4cf8-a69e-a522189755e1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.903872 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.909330 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.909878 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.909884 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.910184 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.954143 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq"] Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.979301 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z268f\" (UniqueName: \"kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.979688 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.979855 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:33 crc kubenswrapper[4857]: I0318 14:35:33.980037 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.083671 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z268f\" (UniqueName: \"kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.084514 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.084585 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.084706 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.089825 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.090337 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.090823 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.103566 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z268f\" (UniqueName: \"kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.224698 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.831322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c","Type":"ContainerStarted","Data":"b8e4ea017461ef4ff7e5a6c0d95b02685f119bc05da8beeb036b51bd6285fd6a"} Mar 18 14:35:34 crc kubenswrapper[4857]: I0318 14:35:34.888208 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq"] Mar 18 14:35:35 crc kubenswrapper[4857]: W0318 14:35:35.052930 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf20fa8fb_3d4b_40c1_bcc4_6e5f7a362941.slice/crio-ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5 WatchSource:0}: Error finding container ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5: Status 404 returned error can't find the container with id ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5 Mar 18 14:35:35 crc kubenswrapper[4857]: I0318 14:35:35.851884 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" 
event={"ID":"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941","Type":"ContainerStarted","Data":"ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5"} Mar 18 14:35:35 crc kubenswrapper[4857]: I0318 14:35:35.854639 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c6880f18-f2cd-43fa-8ef7-8f0d89744e3c","Type":"ContainerStarted","Data":"f4d09e09c554d2c80cfd3cb7f8223c0095010cef534393d496974b914b02c561"} Mar 18 14:35:35 crc kubenswrapper[4857]: I0318 14:35:35.877080 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.568271866 podStartE2EDuration="9.877057349s" podCreationTimestamp="2026-03-18 14:35:26 +0000 UTC" firstStartedPulling="2026-03-18 14:35:27.798008983 +0000 UTC m=+2111.927137440" lastFinishedPulling="2026-03-18 14:35:35.106794466 +0000 UTC m=+2119.235922923" observedRunningTime="2026-03-18 14:35:35.875685814 +0000 UTC m=+2120.004814281" watchObservedRunningTime="2026-03-18 14:35:35.877057349 +0000 UTC m=+2120.006185816" Mar 18 14:35:36 crc kubenswrapper[4857]: I0318 14:35:36.876437 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" event={"ID":"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941","Type":"ContainerStarted","Data":"b27153d8500db813297369aa1149607c26b10d0e1f4bef5b98cc0cc172841489"} Mar 18 14:35:36 crc kubenswrapper[4857]: I0318 14:35:36.905315 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" podStartSLOduration=2.44063183 podStartE2EDuration="3.905283086s" podCreationTimestamp="2026-03-18 14:35:33 +0000 UTC" firstStartedPulling="2026-03-18 14:35:35.060366427 +0000 UTC m=+2119.189494884" lastFinishedPulling="2026-03-18 14:35:36.525017693 +0000 UTC m=+2120.654146140" observedRunningTime="2026-03-18 14:35:36.899154751 +0000 UTC m=+2121.028283228" watchObservedRunningTime="2026-03-18 
14:35:36.905283086 +0000 UTC m=+2121.034411553" Mar 18 14:35:39 crc kubenswrapper[4857]: I0318 14:35:39.053811 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6t5nq"] Mar 18 14:35:39 crc kubenswrapper[4857]: I0318 14:35:39.073266 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6t5nq"] Mar 18 14:35:39 crc kubenswrapper[4857]: I0318 14:35:39.198823 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ca8507-d904-4c86-b90e-7348e4e0d0e9" path="/var/lib/kubelet/pods/70ca8507-d904-4c86-b90e-7348e4e0d0e9/volumes" Mar 18 14:35:48 crc kubenswrapper[4857]: I0318 14:35:48.052797 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cf79-account-create-update-qkb2x"] Mar 18 14:35:48 crc kubenswrapper[4857]: I0318 14:35:48.070278 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cf79-account-create-update-qkb2x"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.050982 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-t9srg"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.069445 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-t9srg"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.082387 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-c8wq4"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.103166 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6a01-account-create-update-bx5tc"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.116026 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-af02-account-create-update-bxg27"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.130465 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2hgxn"] Mar 18 14:35:49 crc 
kubenswrapper[4857]: I0318 14:35:49.142845 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-c8wq4"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.162178 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sclfz"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.193005 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da3dc31-b98c-4d11-8837-96fe5c7d8398" path="/var/lib/kubelet/pods/1da3dc31-b98c-4d11-8837-96fe5c7d8398/volumes" Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.197796 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3ed0c05-eea8-4b99-80bc-f4cee9075f8a" path="/var/lib/kubelet/pods/a3ed0c05-eea8-4b99-80bc-f4cee9075f8a/volumes" Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.201719 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab5d828-1730-4e36-a0a4-57704e03f6d9" path="/var/lib/kubelet/pods/bab5d828-1730-4e36-a0a4-57704e03f6d9/volumes" Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.203280 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2hgxn"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.203333 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6a01-account-create-update-bx5tc"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.205032 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fbf8-account-create-update-hvlxn"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.218901 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fbf8-account-create-update-hvlxn"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.231051 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sclfz"] Mar 18 14:35:49 crc kubenswrapper[4857]: I0318 14:35:49.242501 4857 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/heat-af02-account-create-update-bxg27"] Mar 18 14:35:51 crc kubenswrapper[4857]: I0318 14:35:51.187425 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b0f817-e54c-4f5a-89fb-026c01540ea8" path="/var/lib/kubelet/pods/01b0f817-e54c-4f5a-89fb-026c01540ea8/volumes" Mar 18 14:35:51 crc kubenswrapper[4857]: I0318 14:35:51.192711 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b313e17-3867-49ca-81b4-a35f89dd5b12" path="/var/lib/kubelet/pods/6b313e17-3867-49ca-81b4-a35f89dd5b12/volumes" Mar 18 14:35:51 crc kubenswrapper[4857]: I0318 14:35:51.196250 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b3029f-dd60-425d-b002-6f1b9a6af1b2" path="/var/lib/kubelet/pods/88b3029f-dd60-425d-b002-6f1b9a6af1b2/volumes" Mar 18 14:35:51 crc kubenswrapper[4857]: I0318 14:35:51.198407 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01853ca-d154-4247-b6b5-d0af7407921d" path="/var/lib/kubelet/pods/d01853ca-d154-4247-b6b5-d0af7407921d/volumes" Mar 18 14:35:51 crc kubenswrapper[4857]: I0318 14:35:51.201064 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1c4712-6135-41e6-9535-569379422bd7" path="/var/lib/kubelet/pods/fe1c4712-6135-41e6-9535-569379422bd7/volumes" Mar 18 14:35:54 crc kubenswrapper[4857]: I0318 14:35:54.076356 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kj5zp"] Mar 18 14:35:54 crc kubenswrapper[4857]: I0318 14:35:54.088615 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kj5zp"] Mar 18 14:35:55 crc kubenswrapper[4857]: I0318 14:35:55.188882 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db23dd3d-8bc7-41ba-9e68-888a9ddb984a" path="/var/lib/kubelet/pods/db23dd3d-8bc7-41ba-9e68-888a9ddb984a/volumes" Mar 18 14:35:57 crc kubenswrapper[4857]: I0318 14:35:57.039582 4857 patch_prober.go:28] interesting 
pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:35:57 crc kubenswrapper[4857]: I0318 14:35:57.040203 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:35:57 crc kubenswrapper[4857]: I0318 14:35:57.217446 4857 generic.go:334] "Generic (PLEG): container finished" podID="bffd47eb-3c88-41b8-bda7-f885b44d3ee8" containerID="2ff33b53dbc7b391c559dad629fae48511ef3e605dbe7d0b6e55f062c212d1f8" exitCode=0 Mar 18 14:35:57 crc kubenswrapper[4857]: I0318 14:35:57.217594 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"bffd47eb-3c88-41b8-bda7-f885b44d3ee8","Type":"ContainerDied","Data":"2ff33b53dbc7b391c559dad629fae48511ef3e605dbe7d0b6e55f062c212d1f8"} Mar 18 14:35:58 crc kubenswrapper[4857]: I0318 14:35:58.240562 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"bffd47eb-3c88-41b8-bda7-f885b44d3ee8","Type":"ContainerStarted","Data":"0c5f04efc4248b5ef2dedf7f9e6718f01c9769fa911c0299f81bd1975d1c6447"} Mar 18 14:35:58 crc kubenswrapper[4857]: I0318 14:35:58.241235 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Mar 18 14:35:58 crc kubenswrapper[4857]: I0318 14:35:58.301463 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=38.301427176 podStartE2EDuration="38.301427176s" podCreationTimestamp="2026-03-18 14:35:20 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:35:58.271745177 +0000 UTC m=+2142.400873664" watchObservedRunningTime="2026-03-18 14:35:58.301427176 +0000 UTC m=+2142.430555633" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.047117 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-v2q2q"] Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.063723 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-v2q2q"] Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.147046 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564076-xlvb5"] Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.150725 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.154010 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.154395 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.161104 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564076-xlvb5"] Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.161678 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.298663 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrgx\" (UniqueName: \"kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx\") pod \"auto-csr-approver-29564076-xlvb5\" (UID: 
\"8b83a605-c328-47c0-bada-aa7a6f12bfaf\") " pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.704673 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrgx\" (UniqueName: \"kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx\") pod \"auto-csr-approver-29564076-xlvb5\" (UID: \"8b83a605-c328-47c0-bada-aa7a6f12bfaf\") " pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.762865 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrgx\" (UniqueName: \"kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx\") pod \"auto-csr-approver-29564076-xlvb5\" (UID: \"8b83a605-c328-47c0-bada-aa7a6f12bfaf\") " pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:00 crc kubenswrapper[4857]: I0318 14:36:00.806904 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:01 crc kubenswrapper[4857]: I0318 14:36:01.183337 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec10534a-1292-409a-adff-ecfac639275f" path="/var/lib/kubelet/pods/ec10534a-1292-409a-adff-ecfac639275f/volumes" Mar 18 14:36:01 crc kubenswrapper[4857]: I0318 14:36:01.842152 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564076-xlvb5"] Mar 18 14:36:02 crc kubenswrapper[4857]: I0318 14:36:02.305543 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" event={"ID":"8b83a605-c328-47c0-bada-aa7a6f12bfaf","Type":"ContainerStarted","Data":"91c832cd7b4c5bd9e46c999f43f0bd99dd356fb618e67c4026af7630d8202691"} Mar 18 14:36:04 crc kubenswrapper[4857]: I0318 14:36:04.490427 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" event={"ID":"8b83a605-c328-47c0-bada-aa7a6f12bfaf","Type":"ContainerStarted","Data":"6331644b7650bd87f6a0d1e7f180675cd7dcd33ce8b7d2b421c5dada73793682"} Mar 18 14:36:04 crc kubenswrapper[4857]: I0318 14:36:04.539597 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" podStartSLOduration=3.218403624 podStartE2EDuration="4.539576473s" podCreationTimestamp="2026-03-18 14:36:00 +0000 UTC" firstStartedPulling="2026-03-18 14:36:01.874198703 +0000 UTC m=+2146.003327160" lastFinishedPulling="2026-03-18 14:36:03.195371552 +0000 UTC m=+2147.324500009" observedRunningTime="2026-03-18 14:36:04.517821634 +0000 UTC m=+2148.646950091" watchObservedRunningTime="2026-03-18 14:36:04.539576473 +0000 UTC m=+2148.668704930" Mar 18 14:36:06 crc kubenswrapper[4857]: I0318 14:36:06.518775 4857 generic.go:334] "Generic (PLEG): container finished" podID="8b83a605-c328-47c0-bada-aa7a6f12bfaf" 
containerID="6331644b7650bd87f6a0d1e7f180675cd7dcd33ce8b7d2b421c5dada73793682" exitCode=0 Mar 18 14:36:06 crc kubenswrapper[4857]: I0318 14:36:06.518901 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" event={"ID":"8b83a605-c328-47c0-bada-aa7a6f12bfaf","Type":"ContainerDied","Data":"6331644b7650bd87f6a0d1e7f180675cd7dcd33ce8b7d2b421c5dada73793682"} Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.036905 4857 scope.go:117] "RemoveContainer" containerID="6eddf6230131bf022b9b2d44f744bb2ba66ac614e3a73865e06874993d9b25a2" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.119361 4857 scope.go:117] "RemoveContainer" containerID="aef248f2872b50dd123e307693231889d888d754aa2606722b29923b214ea5ac" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.172712 4857 scope.go:117] "RemoveContainer" containerID="3d520738e8f28c1a015729d9cb42e4e5b3fc97ca82cdb471e0dc74d8a18e1ed2" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.315323 4857 scope.go:117] "RemoveContainer" containerID="808401756aad4b0a647939b364a7cefe29951cdd5fb2ecd75a2b76864d2014d6" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.362303 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.386363 4857 scope.go:117] "RemoveContainer" containerID="350af783b3d56bdab2e1a390c685ac1c3c3e5105287d40c8b40dce9d449ec1f1" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.450397 4857 scope.go:117] "RemoveContainer" containerID="5941f98344a30735f0ad088a35d4f8cd42468c17f5f86c4846e843649056a712" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.477571 4857 scope.go:117] "RemoveContainer" containerID="46026b1d619aa0af9234a73fd654abcb1c8aacb5d3b8d9552503983a86d7a042" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.512124 4857 scope.go:117] "RemoveContainer" containerID="9bdd662d75d86f11d7df2747d349545519e7dbeb059642b58d002b34c79f3f44" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.526669 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrgx\" (UniqueName: \"kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx\") pod \"8b83a605-c328-47c0-bada-aa7a6f12bfaf\" (UID: \"8b83a605-c328-47c0-bada-aa7a6f12bfaf\") " Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.534434 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx" (OuterVolumeSpecName: "kube-api-access-2vrgx") pod "8b83a605-c328-47c0-bada-aa7a6f12bfaf" (UID: "8b83a605-c328-47c0-bada-aa7a6f12bfaf"). InnerVolumeSpecName "kube-api-access-2vrgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.557066 4857 scope.go:117] "RemoveContainer" containerID="2d2524fa4901d7edb670f699c8b4d0504848364779bcc5f971fa80cd7332ba05" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.580200 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" event={"ID":"8b83a605-c328-47c0-bada-aa7a6f12bfaf","Type":"ContainerDied","Data":"91c832cd7b4c5bd9e46c999f43f0bd99dd356fb618e67c4026af7630d8202691"} Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.580264 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c832cd7b4c5bd9e46c999f43f0bd99dd356fb618e67c4026af7630d8202691" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.580394 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564076-xlvb5" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.610951 4857 scope.go:117] "RemoveContainer" containerID="e93dba244742908f989d91d186808237faaf628a8f4def33c61b31ea1525b128" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.623298 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564070-fjwnr"] Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.632842 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vrgx\" (UniqueName: \"kubernetes.io/projected/8b83a605-c328-47c0-bada-aa7a6f12bfaf-kube-api-access-2vrgx\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.642643 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564070-fjwnr"] Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.675251 4857 scope.go:117] "RemoveContainer" containerID="dda1b8b37c3c421ff3e0c2536377c26c77f08d76649a1d8a325e4e847d0f1763" Mar 18 14:36:08 crc 
kubenswrapper[4857]: I0318 14:36:08.698872 4857 scope.go:117] "RemoveContainer" containerID="48a200e6e484cdb5f74dac7ea160ebb3a82f5f2a2addf8dee193fc6c2f3d7ebd" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.744388 4857 scope.go:117] "RemoveContainer" containerID="f938c7ba217900403aaae4bef2fa16d3971dcaa20a53f6ecbd6cce1225c680a7" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.780136 4857 scope.go:117] "RemoveContainer" containerID="0a2b398ef4eab5f964b86591a567b3bc647ed0df55363c7d317741cd0114aecc" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.813167 4857 scope.go:117] "RemoveContainer" containerID="7915919b5e2c4256a4913cf9bc37216c45f1a3ddff357221a2d09b5c1fa1c37c" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.848740 4857 scope.go:117] "RemoveContainer" containerID="2ed308be836bd7991f890aa94f9af0da26f437f5b82d59f01acd49062cb12c2f" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.879744 4857 scope.go:117] "RemoveContainer" containerID="5cb7c97a7725417d60b3a1f16a92616e6352fe2341935a67abeb8b65ed3a0c9d" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.907209 4857 scope.go:117] "RemoveContainer" containerID="6d587580fb0096e6795bce2b9720b3097b84311499130470fc770f7887ce7f7c" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.942879 4857 scope.go:117] "RemoveContainer" containerID="4b72b30ab7e5a386c534851b7a2854588f5cfba416e11b4035413c27b369c3a0" Mar 18 14:36:08 crc kubenswrapper[4857]: I0318 14:36:08.973453 4857 scope.go:117] "RemoveContainer" containerID="eae50aaab562ae921e74e95fbfe9cd685e1a1337f5cdec8279e4de6f7759ccdb" Mar 18 14:36:09 crc kubenswrapper[4857]: I0318 14:36:09.002638 4857 scope.go:117] "RemoveContainer" containerID="bfefc711fdb8b00c14eb35c329385bcbf2fc37de6e0c746132826b7c0236c108" Mar 18 14:36:09 crc kubenswrapper[4857]: I0318 14:36:09.040208 4857 scope.go:117] "RemoveContainer" containerID="bed9b8d54107b7aad8ba44925a95ecb0c45f5be332d2c48fede93d6440e60bea" Mar 18 14:36:09 crc kubenswrapper[4857]: I0318 
14:36:09.069400 4857 scope.go:117] "RemoveContainer" containerID="98f7202f69d620bf3aaade18d3ac96490d85c235823083bf22ab32bc0897ef45" Mar 18 14:36:09 crc kubenswrapper[4857]: I0318 14:36:09.182330 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8c4d1a-70c4-46f5-9a60-742ac9bfb730" path="/var/lib/kubelet/pods/3a8c4d1a-70c4-46f5-9a60-742ac9bfb730/volumes" Mar 18 14:36:11 crc kubenswrapper[4857]: I0318 14:36:11.233217 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Mar 18 14:36:11 crc kubenswrapper[4857]: I0318 14:36:11.300153 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:16 crc kubenswrapper[4857]: I0318 14:36:16.466191 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" containerID="cri-o://485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02" gracePeriod=604795 Mar 18 14:36:19 crc kubenswrapper[4857]: I0318 14:36:19.491057 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.134:5671: connect: connection refused" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.252775 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.341105 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.341308 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.341374 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.341448 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.342689 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxl5\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.343465 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.343504 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.343565 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.343673 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.346514 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.347055 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"a0ac0772-875b-4de1-8839-d7d4c90cffee\" (UID: \"a0ac0772-875b-4de1-8839-d7d4c90cffee\") " Mar 
18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.347335 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.351793 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.352166 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5" (OuterVolumeSpecName: "kube-api-access-rnxl5") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "kube-api-access-rnxl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.352726 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.372890 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.377888 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.380067 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info" (OuterVolumeSpecName: "pod-info") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381020 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381052 4857 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a0ac0772-875b-4de1-8839-d7d4c90cffee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381064 4857 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381073 4857 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a0ac0772-875b-4de1-8839-d7d4c90cffee-pod-info\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381082 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxl5\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-kube-api-access-rnxl5\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381090 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.381098 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc 
kubenswrapper[4857]: I0318 14:36:23.431022 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b" (OuterVolumeSpecName: "persistence") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.433820 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data" (OuterVolumeSpecName: "config-data") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.453151 4857 generic.go:334] "Generic (PLEG): container finished" podID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerID="485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02" exitCode=0 Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.453233 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerDied","Data":"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02"} Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.453278 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a0ac0772-875b-4de1-8839-d7d4c90cffee","Type":"ContainerDied","Data":"7eb24d41308be462f2cccfc680e27eebd3d72f7d4f58d47089a3728a7d5b712b"} Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.453305 4857 scope.go:117] "RemoveContainer" containerID="485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.453638 4857 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.476081 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf" (OuterVolumeSpecName: "server-conf") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.484214 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") on node \"crc\" " Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.484265 4857 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-server-conf\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.484280 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a0ac0772-875b-4de1-8839-d7d4c90cffee-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.540484 4857 scope.go:117] "RemoveContainer" containerID="513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.558196 4857 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.558391 4857 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b") on node "crc" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.577078 4857 scope.go:117] "RemoveContainer" containerID="485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02" Mar 18 14:36:23 crc kubenswrapper[4857]: E0318 14:36:23.577912 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02\": container with ID starting with 485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02 not found: ID does not exist" containerID="485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.577969 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02"} err="failed to get container status \"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02\": rpc error: code = NotFound desc = could not find container \"485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02\": container with ID starting with 485d7f951e6ed7ed0038dc9c53920daeca5f99e1ce8a2e6e4de25ae607bfbd02 not found: ID does not exist" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.577996 4857 scope.go:117] "RemoveContainer" containerID="513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c" Mar 18 14:36:23 crc kubenswrapper[4857]: E0318 14:36:23.578445 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c\": container with ID starting 
with 513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c not found: ID does not exist" containerID="513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.578512 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c"} err="failed to get container status \"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c\": rpc error: code = NotFound desc = could not find container \"513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c\": container with ID starting with 513bd2ee079277c27429f25554a8e83ce402d5a971052e549e210280e7f4ef1c not found: ID does not exist" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.586652 4857 reconciler_common.go:293] "Volume detached for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.638266 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a0ac0772-875b-4de1-8839-d7d4c90cffee" (UID: "a0ac0772-875b-4de1-8839-d7d4c90cffee"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.688700 4857 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a0ac0772-875b-4de1-8839-d7d4c90cffee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.820981 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.834857 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.860798 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:23 crc kubenswrapper[4857]: E0318 14:36:23.861481 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.861508 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" Mar 18 14:36:23 crc kubenswrapper[4857]: E0318 14:36:23.861533 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="setup-container" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.861540 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="setup-container" Mar 18 14:36:23 crc kubenswrapper[4857]: E0318 14:36:23.861572 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b83a605-c328-47c0-bada-aa7a6f12bfaf" containerName="oc" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.861579 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b83a605-c328-47c0-bada-aa7a6f12bfaf" containerName="oc" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.861861 4857 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" containerName="rabbitmq" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.861887 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b83a605-c328-47c0-bada-aa7a6f12bfaf" containerName="oc" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.863592 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.880663 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893152 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893231 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893363 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-config-data\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893829 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/754a7e75-92a0-4b06-a81d-f00c6cf9957f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893892 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.893993 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.894027 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.894156 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.894358 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57h4\" (UniqueName: 
\"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-kube-api-access-v57h4\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.894431 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.894492 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/754a7e75-92a0-4b06-a81d-f00c6cf9957f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998148 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998327 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57h4\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-kube-api-access-v57h4\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998378 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998420 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/754a7e75-92a0-4b06-a81d-f00c6cf9957f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998502 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998556 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998595 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-config-data\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998769 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/754a7e75-92a0-4b06-a81d-f00c6cf9957f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " 
pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998796 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998837 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.998864 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.999449 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:23 crc kubenswrapper[4857]: I0318 14:36:23.999729 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.000272 4857 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-config-data\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.000606 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.002628 4857 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.002884 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9a8dff8f8a45c8a9f22e8eb98987a1b501748742ba6ae6bab69a4160bd3ccc1b/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.003378 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/754a7e75-92a0-4b06-a81d-f00c6cf9957f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.004500 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-confd\") pod \"rabbitmq-server-0\" 
(UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.004918 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/754a7e75-92a0-4b06-a81d-f00c6cf9957f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.010624 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/754a7e75-92a0-4b06-a81d-f00c6cf9957f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.020952 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.024359 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57h4\" (UniqueName: \"kubernetes.io/projected/754a7e75-92a0-4b06-a81d-f00c6cf9957f-kube-api-access-v57h4\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.092693 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1b0c4a16-def2-4f8b-8b0d-1b5c966ebd1b\") pod \"rabbitmq-server-0\" (UID: \"754a7e75-92a0-4b06-a81d-f00c6cf9957f\") " pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.187202 4857 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 14:36:24 crc kubenswrapper[4857]: I0318 14:36:24.816314 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 14:36:25 crc kubenswrapper[4857]: I0318 14:36:25.178523 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ac0772-875b-4de1-8839-d7d4c90cffee" path="/var/lib/kubelet/pods/a0ac0772-875b-4de1-8839-d7d4c90cffee/volumes" Mar 18 14:36:25 crc kubenswrapper[4857]: I0318 14:36:25.480265 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"754a7e75-92a0-4b06-a81d-f00c6cf9957f","Type":"ContainerStarted","Data":"fb559a4521ee5433a2997c9386960f4fd7a1641e56e6b957b5831301a11153d7"} Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.039040 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.040408 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.040543 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.041883 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.042073 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc" gracePeriod=600 Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.512640 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc" exitCode=0 Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.512725 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc"} Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.513067 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9"} Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.513102 4857 scope.go:117] "RemoveContainer" containerID="6115c7425a9fefd0f76f56309e369039b5e1eb14f471d31ff08d5b2ec6d920c9" Mar 18 14:36:27 crc kubenswrapper[4857]: I0318 14:36:27.516674 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"754a7e75-92a0-4b06-a81d-f00c6cf9957f","Type":"ContainerStarted","Data":"c90d457408a440e073fcec7af6c2fc46e0bc2016b21d3bbac762c19aadaaff42"} Mar 18 14:36:49 crc kubenswrapper[4857]: I0318 14:36:49.064690 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sbw4r"] Mar 18 14:36:49 crc kubenswrapper[4857]: I0318 14:36:49.080570 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sbw4r"] Mar 18 14:36:49 crc kubenswrapper[4857]: I0318 14:36:49.176717 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea129bc-8d82-472e-8c4d-0f1b5e79078e" path="/var/lib/kubelet/pods/2ea129bc-8d82-472e-8c4d-0f1b5e79078e/volumes" Mar 18 14:36:59 crc kubenswrapper[4857]: I0318 14:36:59.509249 4857 generic.go:334] "Generic (PLEG): container finished" podID="754a7e75-92a0-4b06-a81d-f00c6cf9957f" containerID="c90d457408a440e073fcec7af6c2fc46e0bc2016b21d3bbac762c19aadaaff42" exitCode=0 Mar 18 14:36:59 crc kubenswrapper[4857]: I0318 14:36:59.509375 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"754a7e75-92a0-4b06-a81d-f00c6cf9957f","Type":"ContainerDied","Data":"c90d457408a440e073fcec7af6c2fc46e0bc2016b21d3bbac762c19aadaaff42"} Mar 18 14:37:00 crc kubenswrapper[4857]: I0318 14:37:00.524865 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"754a7e75-92a0-4b06-a81d-f00c6cf9957f","Type":"ContainerStarted","Data":"277574c044a80d119ab01d2bb02b07935ebce5b141b225f1e7a620374ce0fb43"} Mar 18 14:37:00 crc kubenswrapper[4857]: I0318 14:37:00.525972 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 18 14:37:00 crc kubenswrapper[4857]: I0318 14:37:00.559215 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.559189506 podStartE2EDuration="37.559189506s" 
podCreationTimestamp="2026-03-18 14:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:37:00.555133054 +0000 UTC m=+2204.684261521" watchObservedRunningTime="2026-03-18 14:37:00.559189506 +0000 UTC m=+2204.688317963" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.012605 4857 scope.go:117] "RemoveContainer" containerID="6fc05dfc3b2dcd496f5146a3392c9717fa78b490ac763824baaff9c85a6de47a" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.056337 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tpllm"] Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.076577 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tpllm"] Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.081018 4857 scope.go:117] "RemoveContainer" containerID="b34e957dfaf8a14a81c6916341d900fdf555b69af4861ae62a71754e5b132d09" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.146892 4857 scope.go:117] "RemoveContainer" containerID="7a8bcdcf54262706908cba206ed52a032a504d1886a28f67bc1bc5fb9b17aba5" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.178277 4857 scope.go:117] "RemoveContainer" containerID="d8c431dea970535feb0393cdff88aee5068915fb73c41137ce7dac0fc68e3554" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.235112 4857 scope.go:117] "RemoveContainer" containerID="b898d5eded8b98c3ffe0268f020718cd96fdf89e63f2b8e91f9fc5b3a349e0f5" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.257718 4857 scope.go:117] "RemoveContainer" containerID="1de61d1d1510ea30bfd9a0d8584be87e1bdb5fbcdd0f3b85c8a2c2c73a6542a8" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.311444 4857 scope.go:117] "RemoveContainer" containerID="2d8b8ef2bdde317d1a167232b48e49f0b61d9d25dc04821e2785d889bc9546f1" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.353808 4857 scope.go:117] "RemoveContainer" 
containerID="c9246e81233a550573bc4ba1256d7c08bb110f0c6ee7e0823a74fb4e43ad623f" Mar 18 14:37:10 crc kubenswrapper[4857]: I0318 14:37:10.387156 4857 scope.go:117] "RemoveContainer" containerID="c7e5c9a676d0f25057307499b76119a8b8c577f909839ceafb63b39524d31878" Mar 18 14:37:11 crc kubenswrapper[4857]: I0318 14:37:11.199110 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5181712d-25da-484b-9eb5-3fc9230bab14" path="/var/lib/kubelet/pods/5181712d-25da-484b-9eb5-3fc9230bab14/volumes" Mar 18 14:37:14 crc kubenswrapper[4857]: I0318 14:37:14.193123 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 18 14:37:31 crc kubenswrapper[4857]: I0318 14:37:31.970844 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:37:31 crc kubenswrapper[4857]: I0318 14:37:31.982120 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:31 crc kubenswrapper[4857]: I0318 14:37:31.991834 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.081433 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.082728 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " 
pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.082902 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnx2\" (UniqueName: \"kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.185717 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.185814 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnx2\" (UniqueName: \"kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.185878 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.186331 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " 
pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.186494 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.214033 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnx2\" (UniqueName: \"kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2\") pod \"redhat-operators-hpjtn\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:32 crc kubenswrapper[4857]: I0318 14:37:32.562797 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.071117 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cxdpg"] Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.086669 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-92hzs"] Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.102903 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cxdpg"] Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.120291 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-92hzs"] Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.158444 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.228777 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03c5e747-f831-4a2d-a73f-a26848b5c2a6" path="/var/lib/kubelet/pods/03c5e747-f831-4a2d-a73f-a26848b5c2a6/volumes" Mar 18 14:37:33 crc kubenswrapper[4857]: I0318 14:37:33.255031 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4268c3-7d11-484c-8718-736b4fd44de6" path="/var/lib/kubelet/pods/9b4268c3-7d11-484c-8718-736b4fd44de6/volumes" Mar 18 14:37:34 crc kubenswrapper[4857]: I0318 14:37:34.009043 4857 generic.go:334] "Generic (PLEG): container finished" podID="3a120c68-b5dc-4660-8be2-be911aa56803" containerID="e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402" exitCode=0 Mar 18 14:37:34 crc kubenswrapper[4857]: I0318 14:37:34.009283 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerDied","Data":"e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402"} Mar 18 14:37:34 crc kubenswrapper[4857]: I0318 14:37:34.009314 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerStarted","Data":"0e4e4662e98dacf8ab853b5ee7c36f28667d558ac4c53c04fd755776a01bc045"} Mar 18 14:37:38 crc kubenswrapper[4857]: I0318 14:37:38.328507 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerStarted","Data":"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e"} Mar 18 14:37:44 crc kubenswrapper[4857]: I0318 14:37:44.043924 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nmg7v"] Mar 18 14:37:44 crc kubenswrapper[4857]: I0318 14:37:44.054136 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nmg7v"] Mar 18 14:37:45 crc kubenswrapper[4857]: I0318 14:37:45.593359 4857 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6791c442-3e89-4211-b980-e00afa59d6c1" path="/var/lib/kubelet/pods/6791c442-3e89-4211-b980-e00afa59d6c1/volumes" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.175602 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564078-x6f6t"] Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.178564 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.181398 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.182967 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.184133 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.196855 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564078-x6f6t"] Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.276731 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4vr\" (UniqueName: \"kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr\") pod \"auto-csr-approver-29564078-x6f6t\" (UID: \"a141f85e-43a2-4026-84b0-7d24012494f7\") " pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.666534 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4vr\" (UniqueName: \"kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr\") pod \"auto-csr-approver-29564078-x6f6t\" (UID: 
\"a141f85e-43a2-4026-84b0-7d24012494f7\") " pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.725510 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4vr\" (UniqueName: \"kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr\") pod \"auto-csr-approver-29564078-x6f6t\" (UID: \"a141f85e-43a2-4026-84b0-7d24012494f7\") " pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:00 crc kubenswrapper[4857]: I0318 14:38:00.811300 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:01 crc kubenswrapper[4857]: I0318 14:38:01.425094 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564078-x6f6t"] Mar 18 14:38:01 crc kubenswrapper[4857]: W0318 14:38:01.432248 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda141f85e_43a2_4026_84b0_7d24012494f7.slice/crio-c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f WatchSource:0}: Error finding container c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f: Status 404 returned error can't find the container with id c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f Mar 18 14:38:01 crc kubenswrapper[4857]: I0318 14:38:01.962937 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" event={"ID":"a141f85e-43a2-4026-84b0-7d24012494f7","Type":"ContainerStarted","Data":"c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f"} Mar 18 14:38:01 crc kubenswrapper[4857]: I0318 14:38:01.969174 4857 generic.go:334] "Generic (PLEG): container finished" podID="3a120c68-b5dc-4660-8be2-be911aa56803" containerID="fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e" exitCode=0 
Mar 18 14:38:01 crc kubenswrapper[4857]: I0318 14:38:01.969241 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerDied","Data":"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e"} Mar 18 14:38:03 crc kubenswrapper[4857]: I0318 14:38:03.385711 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerStarted","Data":"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3"} Mar 18 14:38:03 crc kubenswrapper[4857]: I0318 14:38:03.422775 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpjtn" podStartSLOduration=5.29386233 podStartE2EDuration="32.422729785s" podCreationTimestamp="2026-03-18 14:37:31 +0000 UTC" firstStartedPulling="2026-03-18 14:37:35.293244121 +0000 UTC m=+2239.422372578" lastFinishedPulling="2026-03-18 14:38:02.422111556 +0000 UTC m=+2266.551240033" observedRunningTime="2026-03-18 14:38:03.416794376 +0000 UTC m=+2267.545922833" watchObservedRunningTime="2026-03-18 14:38:03.422729785 +0000 UTC m=+2267.551858242" Mar 18 14:38:04 crc kubenswrapper[4857]: I0318 14:38:04.398183 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" event={"ID":"a141f85e-43a2-4026-84b0-7d24012494f7","Type":"ContainerStarted","Data":"e56ed5a3b37ce824ca741373a9946d86c6f81ccdee356641c9c70adcc59a293f"} Mar 18 14:38:04 crc kubenswrapper[4857]: I0318 14:38:04.416689 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" podStartSLOduration=2.927793724 podStartE2EDuration="4.416669125s" podCreationTimestamp="2026-03-18 14:38:00 +0000 UTC" firstStartedPulling="2026-03-18 14:38:01.43711566 +0000 UTC m=+2265.566244117" 
lastFinishedPulling="2026-03-18 14:38:02.925991051 +0000 UTC m=+2267.055119518" observedRunningTime="2026-03-18 14:38:04.41209401 +0000 UTC m=+2268.541222467" watchObservedRunningTime="2026-03-18 14:38:04.416669125 +0000 UTC m=+2268.545797582" Mar 18 14:38:05 crc kubenswrapper[4857]: I0318 14:38:05.411015 4857 generic.go:334] "Generic (PLEG): container finished" podID="a141f85e-43a2-4026-84b0-7d24012494f7" containerID="e56ed5a3b37ce824ca741373a9946d86c6f81ccdee356641c9c70adcc59a293f" exitCode=0 Mar 18 14:38:05 crc kubenswrapper[4857]: I0318 14:38:05.411414 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" event={"ID":"a141f85e-43a2-4026-84b0-7d24012494f7","Type":"ContainerDied","Data":"e56ed5a3b37ce824ca741373a9946d86c6f81ccdee356641c9c70adcc59a293f"} Mar 18 14:38:06 crc kubenswrapper[4857]: I0318 14:38:06.895227 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.085037 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4vr\" (UniqueName: \"kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr\") pod \"a141f85e-43a2-4026-84b0-7d24012494f7\" (UID: \"a141f85e-43a2-4026-84b0-7d24012494f7\") " Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.093458 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr" (OuterVolumeSpecName: "kube-api-access-8n4vr") pod "a141f85e-43a2-4026-84b0-7d24012494f7" (UID: "a141f85e-43a2-4026-84b0-7d24012494f7"). InnerVolumeSpecName "kube-api-access-8n4vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.189446 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4vr\" (UniqueName: \"kubernetes.io/projected/a141f85e-43a2-4026-84b0-7d24012494f7-kube-api-access-8n4vr\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.436661 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" event={"ID":"a141f85e-43a2-4026-84b0-7d24012494f7","Type":"ContainerDied","Data":"c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f"} Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.436898 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39f9ce44bc9cf09110de781f95aad82b71a21d16fe289956add5dcc5d5b862f" Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.436766 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564078-x6f6t" Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.507150 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564072-rcj95"] Mar 18 14:38:07 crc kubenswrapper[4857]: I0318 14:38:07.518999 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564072-rcj95"] Mar 18 14:38:09 crc kubenswrapper[4857]: I0318 14:38:09.177707 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472573ed-cb00-48c2-b290-adb0a4f69739" path="/var/lib/kubelet/pods/472573ed-cb00-48c2-b290-adb0a4f69739/volumes" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.632678 4857 scope.go:117] "RemoveContainer" containerID="ac303c9ce4edd411fc80758bae6e07e3f1f9d86bb886768228e72a451c481388" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.681869 4857 scope.go:117] "RemoveContainer" 
containerID="a371c313f22f4bd6912388cf881565ebdec5cd45e8d6f0d8aa0a5352d293572d" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.749469 4857 scope.go:117] "RemoveContainer" containerID="08578350050cb0dd78a45cc22785fa4c14c09a3262907a84df685906699b2f16" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.823166 4857 scope.go:117] "RemoveContainer" containerID="f50dc8cb888eab1560efbc5460bc54cc88218bf7266de0c42e2c0a80fc60017c" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.879150 4857 scope.go:117] "RemoveContainer" containerID="bba88419f71654d0b2e3a3ae1185e9d1075e47096aa26c8774ad7975ad4234f0" Mar 18 14:38:10 crc kubenswrapper[4857]: I0318 14:38:10.903802 4857 scope.go:117] "RemoveContainer" containerID="e064293d27d95c5430bf540f491605545fdcc216699b5ead83cb106b78936929" Mar 18 14:38:11 crc kubenswrapper[4857]: I0318 14:38:11.445000 4857 scope.go:117] "RemoveContainer" containerID="28bd5e7931ce1023aacc699a882a15a50bc9eda6a1474106f1bf9c4663cd21b7" Mar 18 14:38:12 crc kubenswrapper[4857]: I0318 14:38:12.778219 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:12 crc kubenswrapper[4857]: I0318 14:38:12.778674 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:13 crc kubenswrapper[4857]: I0318 14:38:13.876718 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpjtn" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" probeResult="failure" output=< Mar 18 14:38:13 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:38:13 crc kubenswrapper[4857]: > Mar 18 14:38:23 crc kubenswrapper[4857]: I0318 14:38:23.640080 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpjtn" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" 
containerName="registry-server" probeResult="failure" output=< Mar 18 14:38:23 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:38:23 crc kubenswrapper[4857]: > Mar 18 14:38:27 crc kubenswrapper[4857]: I0318 14:38:27.162586 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:38:27 crc kubenswrapper[4857]: I0318 14:38:27.167024 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:38:33 crc kubenswrapper[4857]: I0318 14:38:33.764657 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpjtn" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" probeResult="failure" output=< Mar 18 14:38:33 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:38:33 crc kubenswrapper[4857]: > Mar 18 14:38:43 crc kubenswrapper[4857]: I0318 14:38:43.616678 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpjtn" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" probeResult="failure" output=< Mar 18 14:38:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:38:43 crc kubenswrapper[4857]: > Mar 18 14:38:47 crc kubenswrapper[4857]: I0318 14:38:47.090809 4857 generic.go:334] "Generic (PLEG): container finished" podID="f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" 
containerID="b27153d8500db813297369aa1149607c26b10d0e1f4bef5b98cc0cc172841489" exitCode=0 Mar 18 14:38:47 crc kubenswrapper[4857]: I0318 14:38:47.090907 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" event={"ID":"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941","Type":"ContainerDied","Data":"b27153d8500db813297369aa1149607c26b10d0e1f4bef5b98cc0cc172841489"} Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.320260 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.454290 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z268f\" (UniqueName: \"kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f\") pod \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.454828 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory\") pod \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.454878 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle\") pod \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.455021 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam\") pod \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\" (UID: \"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941\") " Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.462139 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" (UID: "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.462534 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f" (OuterVolumeSpecName: "kube-api-access-z268f") pod "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" (UID: "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941"). InnerVolumeSpecName "kube-api-access-z268f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.495246 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory" (OuterVolumeSpecName: "inventory") pod "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" (UID: "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.523943 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" (UID: "f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.558194 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z268f\" (UniqueName: \"kubernetes.io/projected/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-kube-api-access-z268f\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.558243 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.558262 4857 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.558276 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.701377 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" event={"ID":"f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941","Type":"ContainerDied","Data":"ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5"} Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.701456 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae140d403bdb5b992bc91f63b125f6c6e5b818fa4d1f17a0a42513d6289f5ba5" Mar 18 14:38:50 crc kubenswrapper[4857]: I0318 14:38:50.701539 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.710614 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss"] Mar 18 14:38:51 crc kubenswrapper[4857]: E0318 14:38:51.715117 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a141f85e-43a2-4026-84b0-7d24012494f7" containerName="oc" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.715155 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a141f85e-43a2-4026-84b0-7d24012494f7" containerName="oc" Mar 18 14:38:51 crc kubenswrapper[4857]: E0318 14:38:51.715198 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.715207 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.716648 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.716695 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a141f85e-43a2-4026-84b0-7d24012494f7" containerName="oc" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.718318 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.722709 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.727869 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.728078 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.730117 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.743437 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss"] Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.908744 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 14:38:51.908986 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:51 crc kubenswrapper[4857]: I0318 
14:38:51.909330 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8dx\" (UniqueName: \"kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.013432 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.013964 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8dx\" (UniqueName: \"kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.014976 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.029515 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.030622 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.036985 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8dx\" (UniqueName: \"kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vlpss\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.090021 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.846056 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.912957 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:52 crc kubenswrapper[4857]: I0318 14:38:52.931356 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss"] Mar 18 14:38:53 crc kubenswrapper[4857]: I0318 14:38:53.095558 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:38:53 crc kubenswrapper[4857]: I0318 14:38:53.804184 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" event={"ID":"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d","Type":"ContainerStarted","Data":"e4934155b4c4c38253139dc7fbe4266c218f995d5fef6def3d479339be940dec"} Mar 18 14:38:54 crc kubenswrapper[4857]: I0318 14:38:54.821597 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpjtn" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" containerID="cri-o://a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3" gracePeriod=2 Mar 18 14:38:54 crc kubenswrapper[4857]: I0318 14:38:54.822301 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" event={"ID":"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d","Type":"ContainerStarted","Data":"81873b4e67f96aca10f841f0c9d48a91afdbca8b45e00be6195d28782bd703df"} Mar 18 14:38:54 crc kubenswrapper[4857]: I0318 14:38:54.857257 4857 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" podStartSLOduration=3.399194318 podStartE2EDuration="3.857209696s" podCreationTimestamp="2026-03-18 14:38:51 +0000 UTC" firstStartedPulling="2026-03-18 14:38:52.936902769 +0000 UTC m=+2317.066031236" lastFinishedPulling="2026-03-18 14:38:53.394918147 +0000 UTC m=+2317.524046614" observedRunningTime="2026-03-18 14:38:54.854304533 +0000 UTC m=+2318.983432990" watchObservedRunningTime="2026-03-18 14:38:54.857209696 +0000 UTC m=+2318.986338153" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.542876 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.629264 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities\") pod \"3a120c68-b5dc-4660-8be2-be911aa56803\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.629438 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content\") pod \"3a120c68-b5dc-4660-8be2-be911aa56803\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.629467 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhnx2\" (UniqueName: \"kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2\") pod \"3a120c68-b5dc-4660-8be2-be911aa56803\" (UID: \"3a120c68-b5dc-4660-8be2-be911aa56803\") " Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.630613 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities" (OuterVolumeSpecName: "utilities") pod "3a120c68-b5dc-4660-8be2-be911aa56803" (UID: "3a120c68-b5dc-4660-8be2-be911aa56803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.637126 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2" (OuterVolumeSpecName: "kube-api-access-vhnx2") pod "3a120c68-b5dc-4660-8be2-be911aa56803" (UID: "3a120c68-b5dc-4660-8be2-be911aa56803"). InnerVolumeSpecName "kube-api-access-vhnx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.732818 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.732861 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhnx2\" (UniqueName: \"kubernetes.io/projected/3a120c68-b5dc-4660-8be2-be911aa56803-kube-api-access-vhnx2\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.828051 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a120c68-b5dc-4660-8be2-be911aa56803" (UID: "3a120c68-b5dc-4660-8be2-be911aa56803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.835009 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a120c68-b5dc-4660-8be2-be911aa56803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.848289 4857 generic.go:334] "Generic (PLEG): container finished" podID="3a120c68-b5dc-4660-8be2-be911aa56803" containerID="a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3" exitCode=0 Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.848369 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerDied","Data":"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3"} Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.848471 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpjtn" event={"ID":"3a120c68-b5dc-4660-8be2-be911aa56803","Type":"ContainerDied","Data":"0e4e4662e98dacf8ab853b5ee7c36f28667d558ac4c53c04fd755776a01bc045"} Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.848508 4857 scope.go:117] "RemoveContainer" containerID="a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.848913 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpjtn" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.885977 4857 scope.go:117] "RemoveContainer" containerID="fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.910863 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.920866 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpjtn"] Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.957322 4857 scope.go:117] "RemoveContainer" containerID="e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.998118 4857 scope.go:117] "RemoveContainer" containerID="a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3" Mar 18 14:38:55 crc kubenswrapper[4857]: E0318 14:38:55.998879 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3\": container with ID starting with a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3 not found: ID does not exist" containerID="a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.998938 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3"} err="failed to get container status \"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3\": rpc error: code = NotFound desc = could not find container \"a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3\": container with ID starting with a1d8a3301206f32a1010cf89f2b7b5e1832f291c396bf7253c02cc109ed4ecf3 not found: ID does 
not exist" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.998972 4857 scope.go:117] "RemoveContainer" containerID="fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e" Mar 18 14:38:55 crc kubenswrapper[4857]: E0318 14:38:55.999642 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e\": container with ID starting with fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e not found: ID does not exist" containerID="fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.999685 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e"} err="failed to get container status \"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e\": rpc error: code = NotFound desc = could not find container \"fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e\": container with ID starting with fc001cdd560ee6b15209806416dccfbeeab5e9a8b0a620047552a1b180fce60e not found: ID does not exist" Mar 18 14:38:55 crc kubenswrapper[4857]: I0318 14:38:55.999720 4857 scope.go:117] "RemoveContainer" containerID="e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402" Mar 18 14:38:56 crc kubenswrapper[4857]: E0318 14:38:56.000139 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402\": container with ID starting with e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402 not found: ID does not exist" containerID="e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402" Mar 18 14:38:56 crc kubenswrapper[4857]: I0318 14:38:56.000182 4857 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402"} err="failed to get container status \"e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402\": rpc error: code = NotFound desc = could not find container \"e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402\": container with ID starting with e077352c9724dda94939e277cdecd0bf9ade5073c8f70e28023fe96d61808402 not found: ID does not exist" Mar 18 14:38:57 crc kubenswrapper[4857]: I0318 14:38:57.039265 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:38:57 crc kubenswrapper[4857]: I0318 14:38:57.039694 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:38:57 crc kubenswrapper[4857]: I0318 14:38:57.179725 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" path="/var/lib/kubelet/pods/3a120c68-b5dc-4660-8be2-be911aa56803/volumes" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.977547 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:08 crc kubenswrapper[4857]: E0318 14:39:08.978806 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.978824 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" Mar 18 14:39:08 crc kubenswrapper[4857]: E0318 14:39:08.978845 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="extract-content" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.978851 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="extract-content" Mar 18 14:39:08 crc kubenswrapper[4857]: E0318 14:39:08.978867 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="extract-utilities" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.978874 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="extract-utilities" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.979205 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a120c68-b5dc-4660-8be2-be911aa56803" containerName="registry-server" Mar 18 14:39:08 crc kubenswrapper[4857]: I0318 14:39:08.981634 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.016272 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.059659 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.059903 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.060404 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.162361 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.162503 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.162539 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.163012 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.163834 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.189708 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s\") pod \"community-operators-kcmld\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.315599 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:09 crc kubenswrapper[4857]: I0318 14:39:09.885113 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:09 crc kubenswrapper[4857]: W0318 14:39:09.887326 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe738c12_79b4_409b_b228_7a289c74857b.slice/crio-3662b85b5d9b1a9c19ee034741a1fed5670cfcaed30fafecd3485c2d53987950 WatchSource:0}: Error finding container 3662b85b5d9b1a9c19ee034741a1fed5670cfcaed30fafecd3485c2d53987950: Status 404 returned error can't find the container with id 3662b85b5d9b1a9c19ee034741a1fed5670cfcaed30fafecd3485c2d53987950 Mar 18 14:39:10 crc kubenswrapper[4857]: I0318 14:39:10.077904 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerStarted","Data":"3662b85b5d9b1a9c19ee034741a1fed5670cfcaed30fafecd3485c2d53987950"} Mar 18 14:39:11 crc kubenswrapper[4857]: I0318 14:39:11.095329 4857 generic.go:334] "Generic (PLEG): container finished" podID="be738c12-79b4-409b-b228-7a289c74857b" containerID="e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166" exitCode=0 Mar 18 14:39:11 crc kubenswrapper[4857]: I0318 14:39:11.095444 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerDied","Data":"e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166"} Mar 18 14:39:13 crc kubenswrapper[4857]: I0318 14:39:13.412156 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" 
event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerStarted","Data":"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a"} Mar 18 14:39:15 crc kubenswrapper[4857]: I0318 14:39:15.409131 4857 generic.go:334] "Generic (PLEG): container finished" podID="be738c12-79b4-409b-b228-7a289c74857b" containerID="9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a" exitCode=0 Mar 18 14:39:15 crc kubenswrapper[4857]: I0318 14:39:15.409695 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerDied","Data":"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a"} Mar 18 14:39:17 crc kubenswrapper[4857]: I0318 14:39:17.479057 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerStarted","Data":"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336"} Mar 18 14:39:17 crc kubenswrapper[4857]: I0318 14:39:17.520711 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kcmld" podStartSLOduration=4.35914981 podStartE2EDuration="9.520685628s" podCreationTimestamp="2026-03-18 14:39:08 +0000 UTC" firstStartedPulling="2026-03-18 14:39:11.098352744 +0000 UTC m=+2335.227481201" lastFinishedPulling="2026-03-18 14:39:16.259888542 +0000 UTC m=+2340.389017019" observedRunningTime="2026-03-18 14:39:17.516092872 +0000 UTC m=+2341.645221339" watchObservedRunningTime="2026-03-18 14:39:17.520685628 +0000 UTC m=+2341.649814085" Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.070311 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z52bw"] Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.097085 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-z52bw"] Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.189228 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25041d81-4986-400d-b6bc-eab23db1550f" path="/var/lib/kubelet/pods/25041d81-4986-400d-b6bc-eab23db1550f/volumes" Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.316909 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.316983 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:19 crc kubenswrapper[4857]: I0318 14:39:19.378055 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.039078 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.039720 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.039851 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.041949 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.042167 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" gracePeriod=600 Mar 18 14:39:27 crc kubenswrapper[4857]: E0318 14:39:27.197129 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.596933 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" exitCode=0 Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.596996 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9"} Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.597042 4857 scope.go:117] "RemoveContainer" containerID="5cff5a0fcb20bc0a2f581e38cad748a1bb0fa947f1db275563f0d0a6f3be78bc" Mar 18 14:39:27 crc kubenswrapper[4857]: I0318 14:39:27.598329 4857 
scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:39:27 crc kubenswrapper[4857]: E0318 14:39:27.598961 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:39:29 crc kubenswrapper[4857]: I0318 14:39:29.391624 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:29 crc kubenswrapper[4857]: I0318 14:39:29.480414 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:29 crc kubenswrapper[4857]: I0318 14:39:29.620448 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kcmld" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="registry-server" containerID="cri-o://25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336" gracePeriod=2 Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.127804 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.242765 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s\") pod \"be738c12-79b4-409b-b228-7a289c74857b\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.242982 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities\") pod \"be738c12-79b4-409b-b228-7a289c74857b\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.243007 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content\") pod \"be738c12-79b4-409b-b228-7a289c74857b\" (UID: \"be738c12-79b4-409b-b228-7a289c74857b\") " Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.244005 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities" (OuterVolumeSpecName: "utilities") pod "be738c12-79b4-409b-b228-7a289c74857b" (UID: "be738c12-79b4-409b-b228-7a289c74857b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.248623 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s" (OuterVolumeSpecName: "kube-api-access-fsj5s") pod "be738c12-79b4-409b-b228-7a289c74857b" (UID: "be738c12-79b4-409b-b228-7a289c74857b"). InnerVolumeSpecName "kube-api-access-fsj5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.305832 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be738c12-79b4-409b-b228-7a289c74857b" (UID: "be738c12-79b4-409b-b228-7a289c74857b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.346114 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsj5s\" (UniqueName: \"kubernetes.io/projected/be738c12-79b4-409b-b228-7a289c74857b-kube-api-access-fsj5s\") on node \"crc\" DevicePath \"\"" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.346156 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.346169 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be738c12-79b4-409b-b228-7a289c74857b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.631801 4857 generic.go:334] "Generic (PLEG): container finished" podID="be738c12-79b4-409b-b228-7a289c74857b" containerID="25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336" exitCode=0 Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.631858 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kcmld" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.631874 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerDied","Data":"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336"} Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.633133 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kcmld" event={"ID":"be738c12-79b4-409b-b228-7a289c74857b","Type":"ContainerDied","Data":"3662b85b5d9b1a9c19ee034741a1fed5670cfcaed30fafecd3485c2d53987950"} Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.633155 4857 scope.go:117] "RemoveContainer" containerID="25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.662935 4857 scope.go:117] "RemoveContainer" containerID="9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.676911 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.687873 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kcmld"] Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.704701 4857 scope.go:117] "RemoveContainer" containerID="e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.757686 4857 scope.go:117] "RemoveContainer" containerID="25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336" Mar 18 14:39:30 crc kubenswrapper[4857]: E0318 14:39:30.758208 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336\": container with ID starting with 25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336 not found: ID does not exist" containerID="25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.758252 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336"} err="failed to get container status \"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336\": rpc error: code = NotFound desc = could not find container \"25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336\": container with ID starting with 25ec96e555d1c6b3201fe5df577c7a05c6ddbd5566990145ad108a967c86c336 not found: ID does not exist" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.758279 4857 scope.go:117] "RemoveContainer" containerID="9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a" Mar 18 14:39:30 crc kubenswrapper[4857]: E0318 14:39:30.758606 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a\": container with ID starting with 9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a not found: ID does not exist" containerID="9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.758681 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a"} err="failed to get container status \"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a\": rpc error: code = NotFound desc = could not find container \"9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a\": container with ID 
starting with 9d68d6cb880416267ec6c628dd323292e5f6d7e45932c66dab1cb8bfa17e136a not found: ID does not exist" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.758786 4857 scope.go:117] "RemoveContainer" containerID="e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166" Mar 18 14:39:30 crc kubenswrapper[4857]: E0318 14:39:30.759155 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166\": container with ID starting with e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166 not found: ID does not exist" containerID="e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166" Mar 18 14:39:30 crc kubenswrapper[4857]: I0318 14:39:30.759186 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166"} err="failed to get container status \"e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166\": rpc error: code = NotFound desc = could not find container \"e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166\": container with ID starting with e448d511567499a3d53d481bfa4ab11c6b5b8bbbcc3db7fb66d5ada85e329166 not found: ID does not exist" Mar 18 14:39:31 crc kubenswrapper[4857]: I0318 14:39:31.193730 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be738c12-79b4-409b-b228-7a289c74857b" path="/var/lib/kubelet/pods/be738c12-79b4-409b-b228-7a289c74857b/volumes" Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.074090 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dt4sv"] Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.088099 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2bj5j"] Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.101171 4857 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cdc0-account-create-update-9lftd"] Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.112455 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dt4sv"] Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.122468 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cdc0-account-create-update-9lftd"] Mar 18 14:39:36 crc kubenswrapper[4857]: I0318 14:39:36.134549 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2bj5j"] Mar 18 14:39:37 crc kubenswrapper[4857]: I0318 14:39:37.186574 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414737ac-b39d-4b54-bd95-2c8448fd22dc" path="/var/lib/kubelet/pods/414737ac-b39d-4b54-bd95-2c8448fd22dc/volumes" Mar 18 14:39:37 crc kubenswrapper[4857]: I0318 14:39:37.187633 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ad3374-1103-4d14-a250-0efcbc82abf8" path="/var/lib/kubelet/pods/51ad3374-1103-4d14-a250-0efcbc82abf8/volumes" Mar 18 14:39:37 crc kubenswrapper[4857]: I0318 14:39:37.189992 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09c7b6e-3108-4aab-8597-20e2f835cb63" path="/var/lib/kubelet/pods/c09c7b6e-3108-4aab-8597-20e2f835cb63/volumes" Mar 18 14:39:38 crc kubenswrapper[4857]: I0318 14:39:38.166590 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:39:38 crc kubenswrapper[4857]: E0318 14:39:38.168421 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" 
podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.051673 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dd68-account-create-update-mlwzf"] Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.064770 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-056e-account-create-update-lsd7h"] Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.075590 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dd68-account-create-update-mlwzf"] Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.085471 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-056e-account-create-update-lsd7h"] Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.179141 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c281c9a-b573-4e11-acfa-f15205eb5f58" path="/var/lib/kubelet/pods/4c281c9a-b573-4e11-acfa-f15205eb5f58/volumes" Mar 18 14:39:39 crc kubenswrapper[4857]: I0318 14:39:39.183545 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c1ef3d-cd3d-405d-a484-af6052f4a291" path="/var/lib/kubelet/pods/e4c1ef3d-cd3d-405d-a484-af6052f4a291/volumes" Mar 18 14:39:52 crc kubenswrapper[4857]: I0318 14:39:52.165398 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:39:52 crc kubenswrapper[4857]: E0318 14:39:52.166279 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.571082 4857 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:39:53 crc kubenswrapper[4857]: E0318 14:39:53.573345 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="extract-utilities" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.573396 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="extract-utilities" Mar 18 14:39:53 crc kubenswrapper[4857]: E0318 14:39:53.573449 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="registry-server" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.573460 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="registry-server" Mar 18 14:39:53 crc kubenswrapper[4857]: E0318 14:39:53.573491 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="extract-content" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.573503 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="extract-content" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.574180 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="be738c12-79b4-409b-b228-7a289c74857b" containerName="registry-server" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.579338 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.600532 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.710093 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.710175 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.710236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hcp5\" (UniqueName: \"kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.812524 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.812578 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.812616 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hcp5\" (UniqueName: \"kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.813364 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.813948 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.836584 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hcp5\" (UniqueName: \"kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5\") pod \"redhat-marketplace-t8jl9\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:53 crc kubenswrapper[4857]: I0318 14:39:53.910555 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.579291 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.590616 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.591274 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.664338 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.664557 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.665171 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwgdd\" (UniqueName: \"kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.768438 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.768623 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.768796 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwgdd\" (UniqueName: \"kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.769305 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.769488 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.799661 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwgdd\" (UniqueName: 
\"kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd\") pod \"certified-operators-swg7f\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:54.923956 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:39:55 crc kubenswrapper[4857]: I0318 14:39:55.892545 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:39:56 crc kubenswrapper[4857]: I0318 14:39:56.020274 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:39:56 crc kubenswrapper[4857]: I0318 14:39:56.023269 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerStarted","Data":"ce84f892e2cd4cdccc54a6885c0963ebefcc4633066f235c1ddbb3ada7450298"} Mar 18 14:39:56 crc kubenswrapper[4857]: W0318 14:39:56.035286 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb88ffd_c3f6_42d1_ab39_92cf64ebd217.slice/crio-67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974 WatchSource:0}: Error finding container 67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974: Status 404 returned error can't find the container with id 67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974 Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.046167 4857 generic.go:334] "Generic (PLEG): container finished" podID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerID="110228966e5667219bec2c887343b02ad16ef65b59820d069c5c46876aa3afa1" exitCode=0 Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.046286 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerDied","Data":"110228966e5667219bec2c887343b02ad16ef65b59820d069c5c46876aa3afa1"} Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.046371 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerStarted","Data":"67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974"} Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.049313 4857 generic.go:334] "Generic (PLEG): container finished" podID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerID="8970124b42f6ddc6c7f0cbc43f86dc335ee77015f5bcba242093389ffa3dcd6e" exitCode=0 Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.049378 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerDied","Data":"8970124b42f6ddc6c7f0cbc43f86dc335ee77015f5bcba242093389ffa3dcd6e"} Mar 18 14:39:57 crc kubenswrapper[4857]: I0318 14:39:57.050447 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:39:59 crc kubenswrapper[4857]: I0318 14:39:59.079733 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerStarted","Data":"e2c3ef182ae0c1ec5c16c23cbdf295d368611f76297a442675af2353af73e27c"} Mar 18 14:39:59 crc kubenswrapper[4857]: I0318 14:39:59.083640 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerStarted","Data":"2eaabd575588da97498748a3b64f7ebaee818d79792b28b634ea4231bf6f2dbd"} Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.154942 4857 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564080-xt28k"] Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.157323 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.160328 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.160863 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.163271 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.183318 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564080-xt28k"] Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.333143 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgc5j\" (UniqueName: \"kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j\") pod \"auto-csr-approver-29564080-xt28k\" (UID: \"56f08f18-917e-413d-abfe-6ab1006a460d\") " pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.436465 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgc5j\" (UniqueName: \"kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j\") pod \"auto-csr-approver-29564080-xt28k\" (UID: \"56f08f18-917e-413d-abfe-6ab1006a460d\") " pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.463614 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgc5j\" 
(UniqueName: \"kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j\") pod \"auto-csr-approver-29564080-xt28k\" (UID: \"56f08f18-917e-413d-abfe-6ab1006a460d\") " pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:00 crc kubenswrapper[4857]: I0318 14:40:00.486550 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:01 crc kubenswrapper[4857]: W0318 14:40:01.077713 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f08f18_917e_413d_abfe_6ab1006a460d.slice/crio-0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d WatchSource:0}: Error finding container 0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d: Status 404 returned error can't find the container with id 0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d Mar 18 14:40:01 crc kubenswrapper[4857]: I0318 14:40:01.080806 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564080-xt28k"] Mar 18 14:40:01 crc kubenswrapper[4857]: I0318 14:40:01.105952 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564080-xt28k" event={"ID":"56f08f18-917e-413d-abfe-6ab1006a460d","Type":"ContainerStarted","Data":"0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d"} Mar 18 14:40:02 crc kubenswrapper[4857]: I0318 14:40:02.184126 4857 generic.go:334] "Generic (PLEG): container finished" podID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerID="2eaabd575588da97498748a3b64f7ebaee818d79792b28b634ea4231bf6f2dbd" exitCode=0 Mar 18 14:40:02 crc kubenswrapper[4857]: I0318 14:40:02.184194 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" 
event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerDied","Data":"2eaabd575588da97498748a3b64f7ebaee818d79792b28b634ea4231bf6f2dbd"} Mar 18 14:40:04 crc kubenswrapper[4857]: I0318 14:40:04.164817 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:40:04 crc kubenswrapper[4857]: E0318 14:40:04.165694 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:40:05 crc kubenswrapper[4857]: I0318 14:40:05.228081 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerDied","Data":"e2c3ef182ae0c1ec5c16c23cbdf295d368611f76297a442675af2353af73e27c"} Mar 18 14:40:05 crc kubenswrapper[4857]: I0318 14:40:05.228842 4857 generic.go:334] "Generic (PLEG): container finished" podID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerID="e2c3ef182ae0c1ec5c16c23cbdf295d368611f76297a442675af2353af73e27c" exitCode=0 Mar 18 14:40:06 crc kubenswrapper[4857]: I0318 14:40:06.246929 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerStarted","Data":"7cce9a17c12bd26ad12af60aa295397d0d4862646d17ed6cedc9ba14f56f5498"} Mar 18 14:40:06 crc kubenswrapper[4857]: I0318 14:40:06.280894 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t8jl9" podStartSLOduration=5.003967046 podStartE2EDuration="13.280871118s" 
podCreationTimestamp="2026-03-18 14:39:53 +0000 UTC" firstStartedPulling="2026-03-18 14:39:57.052163977 +0000 UTC m=+2381.181292444" lastFinishedPulling="2026-03-18 14:40:05.329068049 +0000 UTC m=+2389.458196516" observedRunningTime="2026-03-18 14:40:06.271174383 +0000 UTC m=+2390.400302840" watchObservedRunningTime="2026-03-18 14:40:06.280871118 +0000 UTC m=+2390.409999575" Mar 18 14:40:07 crc kubenswrapper[4857]: I0318 14:40:07.341365 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564080-xt28k" event={"ID":"56f08f18-917e-413d-abfe-6ab1006a460d","Type":"ContainerStarted","Data":"562b96ed7f135d1af671e3c0be443ca94c2239a29fe01b878b00d23df93d5f9e"} Mar 18 14:40:07 crc kubenswrapper[4857]: I0318 14:40:07.346875 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerStarted","Data":"39a1653e70c45dd96610fd7d4108b81820b981b47d04106d7ea64521785c8d13"} Mar 18 14:40:07 crc kubenswrapper[4857]: I0318 14:40:07.366453 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564080-xt28k" podStartSLOduration=2.524473508 podStartE2EDuration="7.366420511s" podCreationTimestamp="2026-03-18 14:40:00 +0000 UTC" firstStartedPulling="2026-03-18 14:40:01.080243983 +0000 UTC m=+2385.209372440" lastFinishedPulling="2026-03-18 14:40:05.922190986 +0000 UTC m=+2390.051319443" observedRunningTime="2026-03-18 14:40:07.356497922 +0000 UTC m=+2391.485626379" watchObservedRunningTime="2026-03-18 14:40:07.366420511 +0000 UTC m=+2391.495548968" Mar 18 14:40:07 crc kubenswrapper[4857]: I0318 14:40:07.380239 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swg7f" podStartSLOduration=4.361360494 podStartE2EDuration="13.380213329s" podCreationTimestamp="2026-03-18 14:39:54 +0000 UTC" 
firstStartedPulling="2026-03-18 14:39:57.050113795 +0000 UTC m=+2381.179242262" lastFinishedPulling="2026-03-18 14:40:06.06896664 +0000 UTC m=+2390.198095097" observedRunningTime="2026-03-18 14:40:07.376137536 +0000 UTC m=+2391.505265993" watchObservedRunningTime="2026-03-18 14:40:07.380213329 +0000 UTC m=+2391.509341786" Mar 18 14:40:08 crc kubenswrapper[4857]: I0318 14:40:08.360803 4857 generic.go:334] "Generic (PLEG): container finished" podID="56f08f18-917e-413d-abfe-6ab1006a460d" containerID="562b96ed7f135d1af671e3c0be443ca94c2239a29fe01b878b00d23df93d5f9e" exitCode=0 Mar 18 14:40:08 crc kubenswrapper[4857]: I0318 14:40:08.360960 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564080-xt28k" event={"ID":"56f08f18-917e-413d-abfe-6ab1006a460d","Type":"ContainerDied","Data":"562b96ed7f135d1af671e3c0be443ca94c2239a29fe01b878b00d23df93d5f9e"} Mar 18 14:40:09 crc kubenswrapper[4857]: I0318 14:40:09.832713 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:09 crc kubenswrapper[4857]: I0318 14:40:09.923297 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgc5j\" (UniqueName: \"kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j\") pod \"56f08f18-917e-413d-abfe-6ab1006a460d\" (UID: \"56f08f18-917e-413d-abfe-6ab1006a460d\") " Mar 18 14:40:09 crc kubenswrapper[4857]: I0318 14:40:09.929962 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j" (OuterVolumeSpecName: "kube-api-access-xgc5j") pod "56f08f18-917e-413d-abfe-6ab1006a460d" (UID: "56f08f18-917e-413d-abfe-6ab1006a460d"). InnerVolumeSpecName "kube-api-access-xgc5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.028095 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgc5j\" (UniqueName: \"kubernetes.io/projected/56f08f18-917e-413d-abfe-6ab1006a460d-kube-api-access-xgc5j\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.405254 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564080-xt28k" event={"ID":"56f08f18-917e-413d-abfe-6ab1006a460d","Type":"ContainerDied","Data":"0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d"} Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.405335 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b727120efef5c81eca97a693771b771d7c885ed8bdbfdbd5a2da1e654775e8d" Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.405411 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564080-xt28k" Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.451977 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564074-5f7nq"] Mar 18 14:40:10 crc kubenswrapper[4857]: I0318 14:40:10.468843 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564074-5f7nq"] Mar 18 14:40:11 crc kubenswrapper[4857]: I0318 14:40:11.183155 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2dd36c-2e2f-439d-89d2-444c435f7749" path="/var/lib/kubelet/pods/ff2dd36c-2e2f-439d-89d2-444c435f7749/volumes" Mar 18 14:40:11 crc kubenswrapper[4857]: I0318 14:40:11.744066 4857 scope.go:117] "RemoveContainer" containerID="f0611cf9711a6192a725046c208a601db4a23533488b88f34572aee00e808023" Mar 18 14:40:11 crc kubenswrapper[4857]: I0318 14:40:11.788476 4857 scope.go:117] "RemoveContainer" 
containerID="808aa0068c3f7277556505fb5d649cf50f221c456cd38a542096ae3a19529c92" Mar 18 14:40:11 crc kubenswrapper[4857]: I0318 14:40:11.898641 4857 scope.go:117] "RemoveContainer" containerID="4b41effda7dc3e901c036a53169496b4790ec2243a7e696aee78a02dd9a6bc97" Mar 18 14:40:12 crc kubenswrapper[4857]: I0318 14:40:12.000890 4857 scope.go:117] "RemoveContainer" containerID="379b4dc48d0d66124e8a359a7733da0fc144f3091d65217bb127ce424e86c197" Mar 18 14:40:12 crc kubenswrapper[4857]: I0318 14:40:12.279539 4857 scope.go:117] "RemoveContainer" containerID="6218698e97e32d7301aaaf93042d3693fece60bfe01bbc0a99f2f988998c89ac" Mar 18 14:40:12 crc kubenswrapper[4857]: I0318 14:40:12.313399 4857 scope.go:117] "RemoveContainer" containerID="be924bdfc117537af15108773f109c8ad3095151d27b4d0f9524ac83e25be840" Mar 18 14:40:12 crc kubenswrapper[4857]: I0318 14:40:12.402421 4857 scope.go:117] "RemoveContainer" containerID="149f1e15e3daf3f8f14ff6ddc3ec387b2e26a80b899edeed9016e44af707ee88" Mar 18 14:40:14 crc kubenswrapper[4857]: I0318 14:40:14.079383 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:14 crc kubenswrapper[4857]: I0318 14:40:14.080013 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:14 crc kubenswrapper[4857]: I0318 14:40:14.138251 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:15 crc kubenswrapper[4857]: I0318 14:40:15.050398 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:15 crc kubenswrapper[4857]: I0318 14:40:15.050457 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:15 crc kubenswrapper[4857]: I0318 14:40:15.123560 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:15 crc kubenswrapper[4857]: I0318 14:40:15.128197 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:15 crc kubenswrapper[4857]: I0318 14:40:15.187382 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:40:16 crc kubenswrapper[4857]: I0318 14:40:16.166698 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:16 crc kubenswrapper[4857]: I0318 14:40:16.781734 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:40:17 crc kubenswrapper[4857]: I0318 14:40:17.091325 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t8jl9" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="registry-server" containerID="cri-o://7cce9a17c12bd26ad12af60aa295397d0d4862646d17ed6cedc9ba14f56f5498" gracePeriod=2 Mar 18 14:40:17 crc kubenswrapper[4857]: I0318 14:40:17.180944 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:40:17 crc kubenswrapper[4857]: E0318 14:40:17.186499 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.113016 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerID="7cce9a17c12bd26ad12af60aa295397d0d4862646d17ed6cedc9ba14f56f5498" exitCode=0 Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.113485 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerDied","Data":"7cce9a17c12bd26ad12af60aa295397d0d4862646d17ed6cedc9ba14f56f5498"} Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.113673 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swg7f" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="registry-server" containerID="cri-o://39a1653e70c45dd96610fd7d4108b81820b981b47d04106d7ea64521785c8d13" gracePeriod=2 Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.505993 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.700361 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content\") pod \"e97739f6-bc7b-4071-b18b-ad85eafa4752\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.700528 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hcp5\" (UniqueName: \"kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5\") pod \"e97739f6-bc7b-4071-b18b-ad85eafa4752\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.700792 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities\") pod \"e97739f6-bc7b-4071-b18b-ad85eafa4752\" (UID: \"e97739f6-bc7b-4071-b18b-ad85eafa4752\") " Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.701870 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities" (OuterVolumeSpecName: "utilities") pod "e97739f6-bc7b-4071-b18b-ad85eafa4752" (UID: "e97739f6-bc7b-4071-b18b-ad85eafa4752"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.708545 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5" (OuterVolumeSpecName: "kube-api-access-5hcp5") pod "e97739f6-bc7b-4071-b18b-ad85eafa4752" (UID: "e97739f6-bc7b-4071-b18b-ad85eafa4752"). InnerVolumeSpecName "kube-api-access-5hcp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.728221 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e97739f6-bc7b-4071-b18b-ad85eafa4752" (UID: "e97739f6-bc7b-4071-b18b-ad85eafa4752"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.804702 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.804760 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97739f6-bc7b-4071-b18b-ad85eafa4752-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:18 crc kubenswrapper[4857]: I0318 14:40:18.804776 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hcp5\" (UniqueName: \"kubernetes.io/projected/e97739f6-bc7b-4071-b18b-ad85eafa4752-kube-api-access-5hcp5\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.155431 4857 generic.go:334] "Generic (PLEG): container finished" podID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerID="39a1653e70c45dd96610fd7d4108b81820b981b47d04106d7ea64521785c8d13" exitCode=0 Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.155527 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerDied","Data":"39a1653e70c45dd96610fd7d4108b81820b981b47d04106d7ea64521785c8d13"} Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.158433 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t8jl9" event={"ID":"e97739f6-bc7b-4071-b18b-ad85eafa4752","Type":"ContainerDied","Data":"ce84f892e2cd4cdccc54a6885c0963ebefcc4633066f235c1ddbb3ada7450298"} Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.158508 4857 scope.go:117] "RemoveContainer" containerID="7cce9a17c12bd26ad12af60aa295397d0d4862646d17ed6cedc9ba14f56f5498" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 
14:40:19.158730 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t8jl9" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.237097 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.250498 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t8jl9"] Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.268723 4857 scope.go:117] "RemoveContainer" containerID="2eaabd575588da97498748a3b64f7ebaee818d79792b28b634ea4231bf6f2dbd" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.296537 4857 scope.go:117] "RemoveContainer" containerID="8970124b42f6ddc6c7f0cbc43f86dc335ee77015f5bcba242093389ffa3dcd6e" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.574535 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.733394 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwgdd\" (UniqueName: \"kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd\") pod \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.733478 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities\") pod \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.733736 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content\") pod \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\" (UID: \"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217\") " Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.734365 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities" (OuterVolumeSpecName: "utilities") pod "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" (UID: "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.735108 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.740019 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd" (OuterVolumeSpecName: "kube-api-access-pwgdd") pod "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" (UID: "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217"). InnerVolumeSpecName "kube-api-access-pwgdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.787531 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" (UID: "7eb88ffd-c3f6-42d1-ab39-92cf64ebd217"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.837774 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwgdd\" (UniqueName: \"kubernetes.io/projected/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-kube-api-access-pwgdd\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:19 crc kubenswrapper[4857]: I0318 14:40:19.837807 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.181482 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg7f" event={"ID":"7eb88ffd-c3f6-42d1-ab39-92cf64ebd217","Type":"ContainerDied","Data":"67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974"} Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.181703 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swg7f" Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.181850 4857 scope.go:117] "RemoveContainer" containerID="39a1653e70c45dd96610fd7d4108b81820b981b47d04106d7ea64521785c8d13" Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.228864 4857 scope.go:117] "RemoveContainer" containerID="e2c3ef182ae0c1ec5c16c23cbdf295d368611f76297a442675af2353af73e27c" Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.240723 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.251825 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swg7f"] Mar 18 14:40:20 crc kubenswrapper[4857]: I0318 14:40:20.259606 4857 scope.go:117] "RemoveContainer" containerID="110228966e5667219bec2c887343b02ad16ef65b59820d069c5c46876aa3afa1" Mar 18 14:40:20 crc kubenswrapper[4857]: E0318 14:40:20.410592 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb88ffd_c3f6_42d1_ab39_92cf64ebd217.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb88ffd_c3f6_42d1_ab39_92cf64ebd217.slice/crio-67ae3ed3f50195bb9107c9b797830b2cacadaba78c2a33695405b11dbcb91974\": RecentStats: unable to find data in memory cache]" Mar 18 14:40:21 crc kubenswrapper[4857]: I0318 14:40:21.038968 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-v4f6x"] Mar 18 14:40:21 crc kubenswrapper[4857]: I0318 14:40:21.336679 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" path="/var/lib/kubelet/pods/7eb88ffd-c3f6-42d1-ab39-92cf64ebd217/volumes" Mar 18 14:40:21 crc kubenswrapper[4857]: I0318 
14:40:21.337480 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" path="/var/lib/kubelet/pods/e97739f6-bc7b-4071-b18b-ad85eafa4752/volumes" Mar 18 14:40:21 crc kubenswrapper[4857]: I0318 14:40:21.338305 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-v4f6x"] Mar 18 14:40:22 crc kubenswrapper[4857]: I0318 14:40:22.042058 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4cc7-account-create-update-k2fkp"] Mar 18 14:40:22 crc kubenswrapper[4857]: I0318 14:40:22.054084 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4cc7-account-create-update-k2fkp"] Mar 18 14:40:23 crc kubenswrapper[4857]: I0318 14:40:23.194313 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e242cf0-f297-425c-8bc7-f6602a60faea" path="/var/lib/kubelet/pods/9e242cf0-f297-425c-8bc7-f6602a60faea/volumes" Mar 18 14:40:23 crc kubenswrapper[4857]: I0318 14:40:23.199246 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99b0f79-9e9d-469e-90c8-3dbb0a5893fc" path="/var/lib/kubelet/pods/f99b0f79-9e9d-469e-90c8-3dbb0a5893fc/volumes" Mar 18 14:40:24 crc kubenswrapper[4857]: I0318 14:40:24.068251 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h7rnb"] Mar 18 14:40:24 crc kubenswrapper[4857]: I0318 14:40:24.084024 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-h7rnb"] Mar 18 14:40:25 crc kubenswrapper[4857]: I0318 14:40:25.202737 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48c014c-1b70-4d0c-b01b-9c1060620b0e" path="/var/lib/kubelet/pods/a48c014c-1b70-4d0c-b01b-9c1060620b0e/volumes" Mar 18 14:40:29 crc kubenswrapper[4857]: I0318 14:40:29.782681 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:40:29 crc 
kubenswrapper[4857]: E0318 14:40:29.784221 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:40:40 crc kubenswrapper[4857]: I0318 14:40:40.165193 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:40:40 crc kubenswrapper[4857]: E0318 14:40:40.166970 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:40:52 crc kubenswrapper[4857]: I0318 14:40:52.164999 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:40:52 crc kubenswrapper[4857]: E0318 14:40:52.165932 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:40:59 crc kubenswrapper[4857]: I0318 14:40:59.076015 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pxxc5"] Mar 18 14:40:59 
crc kubenswrapper[4857]: I0318 14:40:59.086171 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pxxc5"] Mar 18 14:40:59 crc kubenswrapper[4857]: I0318 14:40:59.183991 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b95917b-c40b-4bb7-8064-4d297f45711d" path="/var/lib/kubelet/pods/0b95917b-c40b-4bb7-8064-4d297f45711d/volumes" Mar 18 14:41:01 crc kubenswrapper[4857]: I0318 14:41:01.047587 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xv86z"] Mar 18 14:41:01 crc kubenswrapper[4857]: I0318 14:41:01.063150 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xv86z"] Mar 18 14:41:01 crc kubenswrapper[4857]: I0318 14:41:01.568145 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdab76a8-c643-44eb-8fe5-7fd0ab42f634" path="/var/lib/kubelet/pods/fdab76a8-c643-44eb-8fe5-7fd0ab42f634/volumes" Mar 18 14:41:03 crc kubenswrapper[4857]: I0318 14:41:03.163887 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:41:03 crc kubenswrapper[4857]: E0318 14:41:03.164496 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:41:07 crc kubenswrapper[4857]: I0318 14:41:07.665247 4857 generic.go:334] "Generic (PLEG): container finished" podID="da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" containerID="81873b4e67f96aca10f841f0c9d48a91afdbca8b45e00be6195d28782bd703df" exitCode=0 Mar 18 14:41:07 crc kubenswrapper[4857]: I0318 14:41:07.665384 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" event={"ID":"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d","Type":"ContainerDied","Data":"81873b4e67f96aca10f841f0c9d48a91afdbca8b45e00be6195d28782bd703df"} Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.667989 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.695053 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" event={"ID":"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d","Type":"ContainerDied","Data":"e4934155b4c4c38253139dc7fbe4266c218f995d5fef6def3d479339be940dec"} Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.695103 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4934155b4c4c38253139dc7fbe4266c218f995d5fef6def3d479339be940dec" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.695177 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vlpss" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.759385 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8dx\" (UniqueName: \"kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx\") pod \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.759789 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam\") pod \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.759937 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory\") pod \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\" (UID: \"da0fb4e6-9c13-42e7-8771-3f0fc9d2045d\") " Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.773202 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx" (OuterVolumeSpecName: "kube-api-access-ls8dx") pod "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" (UID: "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d"). InnerVolumeSpecName "kube-api-access-ls8dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.807172 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory" (OuterVolumeSpecName: "inventory") pod "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" (UID: "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.832995 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" (UID: "da0fb4e6-9c13-42e7-8771-3f0fc9d2045d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.847068 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst"] Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.847927 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848026 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.848115 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="extract-content" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848182 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="extract-content" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.848264 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="extract-content" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848334 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="extract-content" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 
14:41:09.848423 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848488 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.848572 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="extract-utilities" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848630 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="extract-utilities" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.848696 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="extract-utilities" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848778 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="extract-utilities" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.848856 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.848911 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: E0318 14:41:09.849017 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f08f18-917e-413d-abfe-6ab1006a460d" containerName="oc" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.849075 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f08f18-917e-413d-abfe-6ab1006a460d" containerName="oc" Mar 18 
14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.849475 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f08f18-917e-413d-abfe-6ab1006a460d" containerName="oc" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.849570 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97739f6-bc7b-4071-b18b-ad85eafa4752" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.849638 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0fb4e6-9c13-42e7-8771-3f0fc9d2045d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.849708 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb88ffd-c3f6-42d1-ab39-92cf64ebd217" containerName="registry-server" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.850897 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.859304 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst"] Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.862451 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk7t\" (UniqueName: \"kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.862586 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.862931 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.863124 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8dx\" (UniqueName: \"kubernetes.io/projected/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-kube-api-access-ls8dx\") on node \"crc\" DevicePath \"\"" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.863153 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.863169 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da0fb4e6-9c13-42e7-8771-3f0fc9d2045d-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.965271 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.965728 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk7t\" (UniqueName: \"kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.965792 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.975044 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.975629 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:09 crc kubenswrapper[4857]: I0318 14:41:09.986597 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk7t\" (UniqueName: \"kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2dvst\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:10 crc kubenswrapper[4857]: I0318 14:41:10.250889 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:41:11 crc kubenswrapper[4857]: I0318 14:41:11.089029 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst"] Mar 18 14:41:11 crc kubenswrapper[4857]: I0318 14:41:11.719526 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" event={"ID":"e6c1f7fd-57a6-4598-8ea2-6986be701e93","Type":"ContainerStarted","Data":"30e5ce882f62c1170516cfc44d9534c1786f9406ac301e6d8cf22ac06081518c"} Mar 18 14:41:12 crc kubenswrapper[4857]: I0318 14:41:12.964567 4857 scope.go:117] "RemoveContainer" containerID="49bbc9ac34c00e20ba6e4558560eedf0334afbf8faf6fb5efbe5ac367c8d9ac8" Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.003353 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" event={"ID":"e6c1f7fd-57a6-4598-8ea2-6986be701e93","Type":"ContainerStarted","Data":"3cfd97ec6e7283be06154dcc87b82503b838352f9050e4aba727acaee945ff0c"} Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.024450 4857 scope.go:117] "RemoveContainer" containerID="3f63307fe76eb7aff1a9297411edb40614fbf2e0d0c93a40fb6eb2bf63d20c99" Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.044150 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" podStartSLOduration=3.568491452 podStartE2EDuration="4.044125035s" podCreationTimestamp="2026-03-18 14:41:09 +0000 UTC" firstStartedPulling="2026-03-18 14:41:11.092625047 +0000 UTC m=+2455.221753494" lastFinishedPulling="2026-03-18 14:41:11.56825862 +0000 UTC m=+2455.697387077" observedRunningTime="2026-03-18 14:41:13.027165767 +0000 UTC m=+2457.156294224" watchObservedRunningTime="2026-03-18 14:41:13.044125035 +0000 UTC m=+2457.173253492" Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.104085 4857 scope.go:117] "RemoveContainer" containerID="c31ab01511e35915445a574b0de906d965ac4f6132f2aa90ccbbc015f2e45e79" Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.161509 4857 scope.go:117] "RemoveContainer" containerID="1b54f4453b663088b33f58cb6d42727ccb556d4b30e9ae03b4455ae278cd5a0a" Mar 18 14:41:13 crc kubenswrapper[4857]: I0318 14:41:13.215475 4857 scope.go:117] "RemoveContainer" containerID="a0f655471d1e1343ce3fc487195d7d991fee99f4c39fb94478d8c4f416cd9e51" Mar 18 14:41:14 crc kubenswrapper[4857]: I0318 14:41:14.164066 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:41:14 crc kubenswrapper[4857]: E0318 14:41:14.164843 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:41:28 crc kubenswrapper[4857]: I0318 14:41:28.164555 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:41:28 crc kubenswrapper[4857]: E0318 14:41:28.165533 4857 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:41:42 crc kubenswrapper[4857]: I0318 14:41:42.346653 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:41:42 crc kubenswrapper[4857]: E0318 14:41:42.358060 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:41:55 crc kubenswrapper[4857]: I0318 14:41:55.165830 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:41:55 crc kubenswrapper[4857]: E0318 14:41:55.166988 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:41:57 crc kubenswrapper[4857]: I0318 14:41:57.074593 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-c7gq9"] Mar 18 14:41:57 crc kubenswrapper[4857]: I0318 14:41:57.090113 4857 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-c7gq9"] Mar 18 14:41:57 crc kubenswrapper[4857]: I0318 14:41:57.478328 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb2ffad-1202-49a7-8129-1ce2ca433b2c" path="/var/lib/kubelet/pods/1fb2ffad-1202-49a7-8129-1ce2ca433b2c/volumes" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.163806 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564082-52c5z"] Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.166485 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.169504 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.169839 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.172441 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.176727 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564082-52c5z"] Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.341899 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphhn\" (UniqueName: \"kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn\") pod \"auto-csr-approver-29564082-52c5z\" (UID: \"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48\") " pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.445117 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xphhn\" (UniqueName: \"kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn\") pod \"auto-csr-approver-29564082-52c5z\" (UID: \"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48\") " pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.470948 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphhn\" (UniqueName: \"kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn\") pod \"auto-csr-approver-29564082-52c5z\" (UID: \"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48\") " pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:00 crc kubenswrapper[4857]: I0318 14:42:00.495716 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:01 crc kubenswrapper[4857]: I0318 14:42:01.027033 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564082-52c5z"] Mar 18 14:42:01 crc kubenswrapper[4857]: I0318 14:42:01.963564 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564082-52c5z" event={"ID":"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48","Type":"ContainerStarted","Data":"d4d61d1a4dd42a904eb72658eb4a6297661d5b9af0cc938c5f3347a14bf7d29d"} Mar 18 14:42:04 crc kubenswrapper[4857]: I0318 14:42:04.003967 4857 generic.go:334] "Generic (PLEG): container finished" podID="5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" containerID="6864d20b874427a334b87984c21b401cc9f1609c6e00009c962fa3f26bf65a02" exitCode=0 Mar 18 14:42:04 crc kubenswrapper[4857]: I0318 14:42:04.004072 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564082-52c5z" event={"ID":"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48","Type":"ContainerDied","Data":"6864d20b874427a334b87984c21b401cc9f1609c6e00009c962fa3f26bf65a02"} Mar 18 14:42:05 crc kubenswrapper[4857]: I0318 
14:42:05.698102 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:05 crc kubenswrapper[4857]: I0318 14:42:05.846715 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphhn\" (UniqueName: \"kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn\") pod \"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48\" (UID: \"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48\") " Mar 18 14:42:05 crc kubenswrapper[4857]: I0318 14:42:05.854166 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn" (OuterVolumeSpecName: "kube-api-access-xphhn") pod "5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" (UID: "5ac8ea03-7f51-4f69-ba7d-c4cf41769b48"). InnerVolumeSpecName "kube-api-access-xphhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:42:05 crc kubenswrapper[4857]: I0318 14:42:05.952252 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphhn\" (UniqueName: \"kubernetes.io/projected/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48-kube-api-access-xphhn\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.027111 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564082-52c5z" Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.027090 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564082-52c5z" event={"ID":"5ac8ea03-7f51-4f69-ba7d-c4cf41769b48","Type":"ContainerDied","Data":"d4d61d1a4dd42a904eb72658eb4a6297661d5b9af0cc938c5f3347a14bf7d29d"} Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.027349 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d61d1a4dd42a904eb72658eb4a6297661d5b9af0cc938c5f3347a14bf7d29d" Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.165072 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:42:06 crc kubenswrapper[4857]: E0318 14:42:06.165586 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.774136 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564076-xlvb5"] Mar 18 14:42:06 crc kubenswrapper[4857]: I0318 14:42:06.788060 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564076-xlvb5"] Mar 18 14:42:07 crc kubenswrapper[4857]: I0318 14:42:07.185762 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b83a605-c328-47c0-bada-aa7a6f12bfaf" path="/var/lib/kubelet/pods/8b83a605-c328-47c0-bada-aa7a6f12bfaf/volumes" Mar 18 14:42:13 crc kubenswrapper[4857]: I0318 14:42:13.481408 4857 scope.go:117] "RemoveContainer" 
containerID="01f565ea7b203e65660c00471b9a10748263428433993345f589cdf26c537c11" Mar 18 14:42:13 crc kubenswrapper[4857]: I0318 14:42:13.890068 4857 scope.go:117] "RemoveContainer" containerID="6331644b7650bd87f6a0d1e7f180675cd7dcd33ce8b7d2b421c5dada73793682" Mar 18 14:42:17 crc kubenswrapper[4857]: I0318 14:42:17.172024 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:42:17 crc kubenswrapper[4857]: E0318 14:42:17.172985 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:42:24 crc kubenswrapper[4857]: I0318 14:42:24.438195 4857 generic.go:334] "Generic (PLEG): container finished" podID="e6c1f7fd-57a6-4598-8ea2-6986be701e93" containerID="3cfd97ec6e7283be06154dcc87b82503b838352f9050e4aba727acaee945ff0c" exitCode=0 Mar 18 14:42:24 crc kubenswrapper[4857]: I0318 14:42:24.438263 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" event={"ID":"e6c1f7fd-57a6-4598-8ea2-6986be701e93","Type":"ContainerDied","Data":"3cfd97ec6e7283be06154dcc87b82503b838352f9050e4aba727acaee945ff0c"} Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.205389 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.355480 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory\") pod \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.355877 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam\") pod \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.355931 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqk7t\" (UniqueName: \"kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t\") pod \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\" (UID: \"e6c1f7fd-57a6-4598-8ea2-6986be701e93\") " Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.363218 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t" (OuterVolumeSpecName: "kube-api-access-kqk7t") pod "e6c1f7fd-57a6-4598-8ea2-6986be701e93" (UID: "e6c1f7fd-57a6-4598-8ea2-6986be701e93"). InnerVolumeSpecName "kube-api-access-kqk7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.397450 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory" (OuterVolumeSpecName: "inventory") pod "e6c1f7fd-57a6-4598-8ea2-6986be701e93" (UID: "e6c1f7fd-57a6-4598-8ea2-6986be701e93"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.409000 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6c1f7fd-57a6-4598-8ea2-6986be701e93" (UID: "e6c1f7fd-57a6-4598-8ea2-6986be701e93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.460061 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.460110 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqk7t\" (UniqueName: \"kubernetes.io/projected/e6c1f7fd-57a6-4598-8ea2-6986be701e93-kube-api-access-kqk7t\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.460126 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6c1f7fd-57a6-4598-8ea2-6986be701e93-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.467776 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" event={"ID":"e6c1f7fd-57a6-4598-8ea2-6986be701e93","Type":"ContainerDied","Data":"30e5ce882f62c1170516cfc44d9534c1786f9406ac301e6d8cf22ac06081518c"} Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.467821 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e5ce882f62c1170516cfc44d9534c1786f9406ac301e6d8cf22ac06081518c" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 
14:42:26.467834 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2dvst" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.870913 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj"] Mar 18 14:42:26 crc kubenswrapper[4857]: E0318 14:42:26.872339 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c1f7fd-57a6-4598-8ea2-6986be701e93" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.872470 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c1f7fd-57a6-4598-8ea2-6986be701e93" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:26 crc kubenswrapper[4857]: E0318 14:42:26.872605 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" containerName="oc" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.872689 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" containerName="oc" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.873194 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c1f7fd-57a6-4598-8ea2-6986be701e93" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.873336 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" containerName="oc" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.874903 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.888422 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.888421 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.888422 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.888577 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.914132 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj"] Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.974729 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 14:42:26.974841 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bct7f\" (UniqueName: \"kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:26 crc kubenswrapper[4857]: I0318 
14:42:26.974894 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.078193 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.078275 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bct7f\" (UniqueName: \"kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.078328 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.085674 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.087391 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.096891 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bct7f\" (UniqueName: \"kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.210334 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:27 crc kubenswrapper[4857]: I0318 14:42:27.774579 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj"] Mar 18 14:42:28 crc kubenswrapper[4857]: I0318 14:42:28.627059 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" event={"ID":"1360813c-f243-4286-b916-f690b79bd637","Type":"ContainerStarted","Data":"5692aaa2836134b2d5709e88baa9d18c600d2241db3532d8cff1c24b802a9f42"} Mar 18 14:42:29 crc kubenswrapper[4857]: I0318 14:42:29.642366 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" event={"ID":"1360813c-f243-4286-b916-f690b79bd637","Type":"ContainerStarted","Data":"334005c1851a25f12f611142e4a8939237d215d7db85e9fcc71f82fe709af804"} Mar 18 14:42:29 crc kubenswrapper[4857]: I0318 14:42:29.668872 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" podStartSLOduration=2.8247806410000003 podStartE2EDuration="3.668765053s" podCreationTimestamp="2026-03-18 14:42:26 +0000 UTC" firstStartedPulling="2026-03-18 14:42:27.782164423 +0000 UTC m=+2531.911292880" lastFinishedPulling="2026-03-18 14:42:28.626148835 +0000 UTC m=+2532.755277292" observedRunningTime="2026-03-18 14:42:29.662083194 +0000 UTC m=+2533.791211661" watchObservedRunningTime="2026-03-18 14:42:29.668765053 +0000 UTC m=+2533.797893510" Mar 18 14:42:31 crc kubenswrapper[4857]: I0318 14:42:31.166878 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:42:31 crc kubenswrapper[4857]: E0318 14:42:31.168003 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:42:33 crc kubenswrapper[4857]: I0318 14:42:33.214287 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:42:36 crc kubenswrapper[4857]: I0318 14:42:36.501029 4857 generic.go:334] "Generic (PLEG): container finished" podID="1360813c-f243-4286-b916-f690b79bd637" containerID="334005c1851a25f12f611142e4a8939237d215d7db85e9fcc71f82fe709af804" exitCode=0 Mar 18 14:42:36 crc kubenswrapper[4857]: I0318 14:42:36.501100 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" event={"ID":"1360813c-f243-4286-b916-f690b79bd637","Type":"ContainerDied","Data":"334005c1851a25f12f611142e4a8939237d215d7db85e9fcc71f82fe709af804"} Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.271342 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.447044 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam\") pod \"1360813c-f243-4286-b916-f690b79bd637\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.447118 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bct7f\" (UniqueName: \"kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f\") pod \"1360813c-f243-4286-b916-f690b79bd637\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.447238 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory\") pod \"1360813c-f243-4286-b916-f690b79bd637\" (UID: \"1360813c-f243-4286-b916-f690b79bd637\") " Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.459019 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f" (OuterVolumeSpecName: "kube-api-access-bct7f") pod "1360813c-f243-4286-b916-f690b79bd637" (UID: "1360813c-f243-4286-b916-f690b79bd637"). InnerVolumeSpecName "kube-api-access-bct7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.485470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1360813c-f243-4286-b916-f690b79bd637" (UID: "1360813c-f243-4286-b916-f690b79bd637"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.490340 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory" (OuterVolumeSpecName: "inventory") pod "1360813c-f243-4286-b916-f690b79bd637" (UID: "1360813c-f243-4286-b916-f690b79bd637"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.546424 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" event={"ID":"1360813c-f243-4286-b916-f690b79bd637","Type":"ContainerDied","Data":"5692aaa2836134b2d5709e88baa9d18c600d2241db3532d8cff1c24b802a9f42"} Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.546504 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5692aaa2836134b2d5709e88baa9d18c600d2241db3532d8cff1c24b802a9f42" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.546621 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.554438 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.554493 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bct7f\" (UniqueName: \"kubernetes.io/projected/1360813c-f243-4286-b916-f690b79bd637-kube-api-access-bct7f\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.554509 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1360813c-f243-4286-b916-f690b79bd637-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.660797 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn"] Mar 18 14:42:38 crc kubenswrapper[4857]: E0318 14:42:38.661791 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1360813c-f243-4286-b916-f690b79bd637" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.661826 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1360813c-f243-4286-b916-f690b79bd637" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.662146 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1360813c-f243-4286-b916-f690b79bd637" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.663349 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.666839 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.667009 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.667134 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.668375 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.670487 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn"] Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.761455 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.762247 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bg5w\" (UniqueName: \"kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.762421 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.864988 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.865290 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bg5w\" (UniqueName: \"kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.865406 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.871714 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.875320 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:38 crc kubenswrapper[4857]: I0318 14:42:38.890795 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bg5w\" (UniqueName: \"kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jkwbn\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:39 crc kubenswrapper[4857]: I0318 14:42:39.139151 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:42:39 crc kubenswrapper[4857]: I0318 14:42:39.786009 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn"] Mar 18 14:42:40 crc kubenswrapper[4857]: I0318 14:42:40.860557 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" event={"ID":"de89a7e5-ef74-441a-8af0-c8879e1bebdb","Type":"ContainerStarted","Data":"29dc3eedb296435d8b9c4a4b40af22e7c4d08be5f426ad19b7a55a4704a526fc"} Mar 18 14:42:41 crc kubenswrapper[4857]: I0318 14:42:41.873189 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" event={"ID":"de89a7e5-ef74-441a-8af0-c8879e1bebdb","Type":"ContainerStarted","Data":"f265f8fd96617ea0f92e2d3079883db58495f9c474fc79f4f9a5459c07ca24fd"} Mar 18 14:42:41 crc kubenswrapper[4857]: I0318 14:42:41.894408 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" podStartSLOduration=3.177912543 podStartE2EDuration="3.894375263s" podCreationTimestamp="2026-03-18 14:42:38 +0000 UTC" firstStartedPulling="2026-03-18 14:42:39.786176659 +0000 UTC m=+2543.915305116" lastFinishedPulling="2026-03-18 14:42:40.502639349 +0000 UTC m=+2544.631767836" observedRunningTime="2026-03-18 14:42:41.889584002 +0000 UTC m=+2546.018712469" watchObservedRunningTime="2026-03-18 14:42:41.894375263 +0000 UTC m=+2546.023503710" Mar 18 14:42:45 crc kubenswrapper[4857]: I0318 14:42:45.164560 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:42:45 crc kubenswrapper[4857]: E0318 14:42:45.165510 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:42:59 crc kubenswrapper[4857]: I0318 14:42:59.164623 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:42:59 crc kubenswrapper[4857]: E0318 14:42:59.165641 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:43:13 crc kubenswrapper[4857]: I0318 14:43:13.163744 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:43:13 crc kubenswrapper[4857]: E0318 14:43:13.164562 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:43:22 crc kubenswrapper[4857]: I0318 14:43:22.811329 4857 generic.go:334] "Generic (PLEG): container finished" podID="de89a7e5-ef74-441a-8af0-c8879e1bebdb" containerID="f265f8fd96617ea0f92e2d3079883db58495f9c474fc79f4f9a5459c07ca24fd" exitCode=0 Mar 18 14:43:22 crc kubenswrapper[4857]: I0318 14:43:22.811484 4857 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" event={"ID":"de89a7e5-ef74-441a-8af0-c8879e1bebdb","Type":"ContainerDied","Data":"f265f8fd96617ea0f92e2d3079883db58495f9c474fc79f4f9a5459c07ca24fd"} Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.435224 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.537773 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam\") pod \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.537996 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory\") pod \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.538179 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bg5w\" (UniqueName: \"kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w\") pod \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\" (UID: \"de89a7e5-ef74-441a-8af0-c8879e1bebdb\") " Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.557731 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w" (OuterVolumeSpecName: "kube-api-access-6bg5w") pod "de89a7e5-ef74-441a-8af0-c8879e1bebdb" (UID: "de89a7e5-ef74-441a-8af0-c8879e1bebdb"). InnerVolumeSpecName "kube-api-access-6bg5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.591490 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de89a7e5-ef74-441a-8af0-c8879e1bebdb" (UID: "de89a7e5-ef74-441a-8af0-c8879e1bebdb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.631140 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory" (OuterVolumeSpecName: "inventory") pod "de89a7e5-ef74-441a-8af0-c8879e1bebdb" (UID: "de89a7e5-ef74-441a-8af0-c8879e1bebdb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.645729 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.645787 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de89a7e5-ef74-441a-8af0-c8879e1bebdb-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.645800 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bg5w\" (UniqueName: \"kubernetes.io/projected/de89a7e5-ef74-441a-8af0-c8879e1bebdb-kube-api-access-6bg5w\") on node \"crc\" DevicePath \"\"" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.837196 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" 
event={"ID":"de89a7e5-ef74-441a-8af0-c8879e1bebdb","Type":"ContainerDied","Data":"29dc3eedb296435d8b9c4a4b40af22e7c4d08be5f426ad19b7a55a4704a526fc"} Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.837242 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29dc3eedb296435d8b9c4a4b40af22e7c4d08be5f426ad19b7a55a4704a526fc" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.837408 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jkwbn" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.945877 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs"] Mar 18 14:43:24 crc kubenswrapper[4857]: E0318 14:43:24.946640 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de89a7e5-ef74-441a-8af0-c8879e1bebdb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.946680 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="de89a7e5-ef74-441a-8af0-c8879e1bebdb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.947111 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="de89a7e5-ef74-441a-8af0-c8879e1bebdb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.948522 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.951822 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.953336 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.954346 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.958031 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs"] Mar 18 14:43:24 crc kubenswrapper[4857]: I0318 14:43:24.958106 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.058117 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.058569 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.058842 
4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45t5b\" (UniqueName: \"kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.161273 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45t5b\" (UniqueName: \"kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.161736 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.161820 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.166870 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.169206 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.182091 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45t5b\" (UniqueName: \"kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.287096 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:43:25 crc kubenswrapper[4857]: I0318 14:43:25.895935 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs"] Mar 18 14:43:26 crc kubenswrapper[4857]: I0318 14:43:26.872050 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" event={"ID":"332600d9-3b78-4b64-8cb2-97fbc6832fc4","Type":"ContainerStarted","Data":"939c7cf6a60c2194cdc4f17fbb57f49ee61798855ac5967ce4ab058fc2677477"} Mar 18 14:43:26 crc kubenswrapper[4857]: I0318 14:43:26.872361 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" event={"ID":"332600d9-3b78-4b64-8cb2-97fbc6832fc4","Type":"ContainerStarted","Data":"498ba634529306e06a52d453a43e0e5019336e47ae18115876bf1dfc27efac09"} Mar 18 14:43:26 crc kubenswrapper[4857]: I0318 14:43:26.904352 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" podStartSLOduration=2.31853831 podStartE2EDuration="2.904324498s" podCreationTimestamp="2026-03-18 14:43:24 +0000 UTC" firstStartedPulling="2026-03-18 14:43:25.917130557 +0000 UTC m=+2590.046259014" lastFinishedPulling="2026-03-18 14:43:26.502916745 +0000 UTC m=+2590.632045202" observedRunningTime="2026-03-18 14:43:26.892038759 +0000 UTC m=+2591.021167216" watchObservedRunningTime="2026-03-18 14:43:26.904324498 +0000 UTC m=+2591.033452955" Mar 18 14:43:28 crc kubenswrapper[4857]: I0318 14:43:28.168976 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:43:28 crc kubenswrapper[4857]: E0318 14:43:28.169597 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:43:40 crc kubenswrapper[4857]: I0318 14:43:40.165119 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:43:40 crc kubenswrapper[4857]: E0318 14:43:40.166184 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:43:53 crc kubenswrapper[4857]: I0318 14:43:53.163872 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:43:53 crc kubenswrapper[4857]: E0318 14:43:53.164833 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.158708 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564084-27q8j"] Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.161088 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.164057 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.164061 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.164660 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.177582 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564084-27q8j"] Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.178799 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gzh\" (UniqueName: \"kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh\") pod \"auto-csr-approver-29564084-27q8j\" (UID: \"b1e93ceb-db89-4e04-8d42-d598ad3d8579\") " pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.282143 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68gzh\" (UniqueName: \"kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh\") pod \"auto-csr-approver-29564084-27q8j\" (UID: \"b1e93ceb-db89-4e04-8d42-d598ad3d8579\") " pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.306123 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68gzh\" (UniqueName: \"kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh\") pod \"auto-csr-approver-29564084-27q8j\" (UID: \"b1e93ceb-db89-4e04-8d42-d598ad3d8579\") " 
pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:00 crc kubenswrapper[4857]: I0318 14:44:00.492442 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:01 crc kubenswrapper[4857]: I0318 14:44:01.012880 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564084-27q8j"] Mar 18 14:44:01 crc kubenswrapper[4857]: I0318 14:44:01.400544 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564084-27q8j" event={"ID":"b1e93ceb-db89-4e04-8d42-d598ad3d8579","Type":"ContainerStarted","Data":"e837121024cf9d5f2e2e8a2de215dd7f2f6ba5897117a34b50e46318b9a453e8"} Mar 18 14:44:03 crc kubenswrapper[4857]: I0318 14:44:03.426741 4857 generic.go:334] "Generic (PLEG): container finished" podID="b1e93ceb-db89-4e04-8d42-d598ad3d8579" containerID="303eecf131d7b3cbd1ee56267851f15603fb6a9c3922c3f2dcfdcdd7d7cd1d28" exitCode=0 Mar 18 14:44:03 crc kubenswrapper[4857]: I0318 14:44:03.426867 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564084-27q8j" event={"ID":"b1e93ceb-db89-4e04-8d42-d598ad3d8579","Type":"ContainerDied","Data":"303eecf131d7b3cbd1ee56267851f15603fb6a9c3922c3f2dcfdcdd7d7cd1d28"} Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.060503 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.125393 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68gzh\" (UniqueName: \"kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh\") pod \"b1e93ceb-db89-4e04-8d42-d598ad3d8579\" (UID: \"b1e93ceb-db89-4e04-8d42-d598ad3d8579\") " Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.132286 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh" (OuterVolumeSpecName: "kube-api-access-68gzh") pod "b1e93ceb-db89-4e04-8d42-d598ad3d8579" (UID: "b1e93ceb-db89-4e04-8d42-d598ad3d8579"). InnerVolumeSpecName "kube-api-access-68gzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.164007 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:44:05 crc kubenswrapper[4857]: E0318 14:44:05.164500 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.231797 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68gzh\" (UniqueName: \"kubernetes.io/projected/b1e93ceb-db89-4e04-8d42-d598ad3d8579-kube-api-access-68gzh\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.489337 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29564084-27q8j" event={"ID":"b1e93ceb-db89-4e04-8d42-d598ad3d8579","Type":"ContainerDied","Data":"e837121024cf9d5f2e2e8a2de215dd7f2f6ba5897117a34b50e46318b9a453e8"} Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.489711 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e837121024cf9d5f2e2e8a2de215dd7f2f6ba5897117a34b50e46318b9a453e8" Mar 18 14:44:05 crc kubenswrapper[4857]: I0318 14:44:05.489388 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564084-27q8j" Mar 18 14:44:06 crc kubenswrapper[4857]: I0318 14:44:06.146514 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564078-x6f6t"] Mar 18 14:44:06 crc kubenswrapper[4857]: I0318 14:44:06.157847 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564078-x6f6t"] Mar 18 14:44:07 crc kubenswrapper[4857]: I0318 14:44:07.189055 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a141f85e-43a2-4026-84b0-7d24012494f7" path="/var/lib/kubelet/pods/a141f85e-43a2-4026-84b0-7d24012494f7/volumes" Mar 18 14:44:10 crc kubenswrapper[4857]: I0318 14:44:10.042620 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-sjzwc"] Mar 18 14:44:10 crc kubenswrapper[4857]: I0318 14:44:10.055524 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-sjzwc"] Mar 18 14:44:11 crc kubenswrapper[4857]: I0318 14:44:11.181682 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1b19cf8-b3a5-41a0-b839-ec48b892ee5e" path="/var/lib/kubelet/pods/d1b19cf8-b3a5-41a0-b839-ec48b892ee5e/volumes" Mar 18 14:44:14 crc kubenswrapper[4857]: I0318 14:44:14.121301 4857 scope.go:117] "RemoveContainer" containerID="e56ed5a3b37ce824ca741373a9946d86c6f81ccdee356641c9c70adcc59a293f" Mar 18 14:44:14 crc kubenswrapper[4857]: 
I0318 14:44:14.207739 4857 scope.go:117] "RemoveContainer" containerID="33d5a7221a6c74b334204ec059565087dc8d0fa98bb8562d10c7cf520cd07530" Mar 18 14:44:16 crc kubenswrapper[4857]: I0318 14:44:16.164006 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:44:16 crc kubenswrapper[4857]: E0318 14:44:16.164701 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:44:23 crc kubenswrapper[4857]: I0318 14:44:23.267615 4857 generic.go:334] "Generic (PLEG): container finished" podID="332600d9-3b78-4b64-8cb2-97fbc6832fc4" containerID="939c7cf6a60c2194cdc4f17fbb57f49ee61798855ac5967ce4ab058fc2677477" exitCode=0 Mar 18 14:44:23 crc kubenswrapper[4857]: I0318 14:44:23.267701 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" event={"ID":"332600d9-3b78-4b64-8cb2-97fbc6832fc4","Type":"ContainerDied","Data":"939c7cf6a60c2194cdc4f17fbb57f49ee61798855ac5967ce4ab058fc2677477"} Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.817037 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.850021 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam\") pod \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.850089 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory\") pod \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.850297 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45t5b\" (UniqueName: \"kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b\") pod \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\" (UID: \"332600d9-3b78-4b64-8cb2-97fbc6832fc4\") " Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.863352 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b" (OuterVolumeSpecName: "kube-api-access-45t5b") pod "332600d9-3b78-4b64-8cb2-97fbc6832fc4" (UID: "332600d9-3b78-4b64-8cb2-97fbc6832fc4"). InnerVolumeSpecName "kube-api-access-45t5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.896880 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory" (OuterVolumeSpecName: "inventory") pod "332600d9-3b78-4b64-8cb2-97fbc6832fc4" (UID: "332600d9-3b78-4b64-8cb2-97fbc6832fc4"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.898571 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "332600d9-3b78-4b64-8cb2-97fbc6832fc4" (UID: "332600d9-3b78-4b64-8cb2-97fbc6832fc4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.952619 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.952940 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/332600d9-3b78-4b64-8cb2-97fbc6832fc4-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:24 crc kubenswrapper[4857]: I0318 14:44:24.953060 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45t5b\" (UniqueName: \"kubernetes.io/projected/332600d9-3b78-4b64-8cb2-97fbc6832fc4-kube-api-access-45t5b\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:25 crc kubenswrapper[4857]: I0318 14:44:25.315968 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" event={"ID":"332600d9-3b78-4b64-8cb2-97fbc6832fc4","Type":"ContainerDied","Data":"498ba634529306e06a52d453a43e0e5019336e47ae18115876bf1dfc27efac09"} Mar 18 14:44:25 crc kubenswrapper[4857]: I0318 14:44:25.316033 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs" Mar 18 14:44:25 crc kubenswrapper[4857]: I0318 14:44:25.316037 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="498ba634529306e06a52d453a43e0e5019336e47ae18115876bf1dfc27efac09" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.538573 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fkrm8"] Mar 18 14:44:26 crc kubenswrapper[4857]: E0318 14:44:26.540583 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e93ceb-db89-4e04-8d42-d598ad3d8579" containerName="oc" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.540612 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e93ceb-db89-4e04-8d42-d598ad3d8579" containerName="oc" Mar 18 14:44:26 crc kubenswrapper[4857]: E0318 14:44:26.540662 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332600d9-3b78-4b64-8cb2-97fbc6832fc4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.540673 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="332600d9-3b78-4b64-8cb2-97fbc6832fc4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.541112 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="332600d9-3b78-4b64-8cb2-97fbc6832fc4" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.541172 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e93ceb-db89-4e04-8d42-d598ad3d8579" containerName="oc" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.544216 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.548125 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.548175 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.548196 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.552487 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.563127 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fkrm8"] Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.894678 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjl2\" (UniqueName: \"kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.894838 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.894954 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.997525 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.997940 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjl2\" (UniqueName: \"kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:26 crc kubenswrapper[4857]: I0318 14:44:26.998004 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:27 crc kubenswrapper[4857]: I0318 14:44:27.004434 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:27 crc kubenswrapper[4857]: I0318 14:44:27.017228 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:27 crc kubenswrapper[4857]: I0318 14:44:27.023689 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjl2\" (UniqueName: \"kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2\") pod \"ssh-known-hosts-edpm-deployment-fkrm8\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:27 crc kubenswrapper[4857]: I0318 14:44:27.168934 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:27 crc kubenswrapper[4857]: I0318 14:44:27.800888 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fkrm8"] Mar 18 14:44:28 crc kubenswrapper[4857]: I0318 14:44:28.359647 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" event={"ID":"71a9b71f-4dfc-49da-9953-2d1739ff480a","Type":"ContainerStarted","Data":"8070d4a5068890764ca9fb284ea8b87416640f8a789499c1f21d4a72060cb874"} Mar 18 14:44:29 crc kubenswrapper[4857]: I0318 14:44:29.201723 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:44:29 crc kubenswrapper[4857]: I0318 14:44:29.422177 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" event={"ID":"71a9b71f-4dfc-49da-9953-2d1739ff480a","Type":"ContainerStarted","Data":"2aef29a64852f3fc8a15a380860a8e33fcb995aefe2f9aa897cc08980d47379c"} Mar 18 14:44:29 crc 
kubenswrapper[4857]: I0318 14:44:29.466791 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" podStartSLOduration=2.939173835 podStartE2EDuration="3.466737936s" podCreationTimestamp="2026-03-18 14:44:26 +0000 UTC" firstStartedPulling="2026-03-18 14:44:27.808527969 +0000 UTC m=+2651.937656426" lastFinishedPulling="2026-03-18 14:44:28.33609207 +0000 UTC m=+2652.465220527" observedRunningTime="2026-03-18 14:44:29.447304346 +0000 UTC m=+2653.576432803" watchObservedRunningTime="2026-03-18 14:44:29.466737936 +0000 UTC m=+2653.595866403" Mar 18 14:44:30 crc kubenswrapper[4857]: I0318 14:44:30.452694 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed"} Mar 18 14:44:37 crc kubenswrapper[4857]: I0318 14:44:37.817568 4857 generic.go:334] "Generic (PLEG): container finished" podID="71a9b71f-4dfc-49da-9953-2d1739ff480a" containerID="2aef29a64852f3fc8a15a380860a8e33fcb995aefe2f9aa897cc08980d47379c" exitCode=0 Mar 18 14:44:37 crc kubenswrapper[4857]: I0318 14:44:37.817614 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" event={"ID":"71a9b71f-4dfc-49da-9953-2d1739ff480a","Type":"ContainerDied","Data":"2aef29a64852f3fc8a15a380860a8e33fcb995aefe2f9aa897cc08980d47379c"} Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.725797 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.830102 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfjl2\" (UniqueName: \"kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2\") pod \"71a9b71f-4dfc-49da-9953-2d1739ff480a\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.830299 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam\") pod \"71a9b71f-4dfc-49da-9953-2d1739ff480a\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.830340 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0\") pod \"71a9b71f-4dfc-49da-9953-2d1739ff480a\" (UID: \"71a9b71f-4dfc-49da-9953-2d1739ff480a\") " Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.843866 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2" (OuterVolumeSpecName: "kube-api-access-pfjl2") pod "71a9b71f-4dfc-49da-9953-2d1739ff480a" (UID: "71a9b71f-4dfc-49da-9953-2d1739ff480a"). InnerVolumeSpecName "kube-api-access-pfjl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.850938 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" event={"ID":"71a9b71f-4dfc-49da-9953-2d1739ff480a","Type":"ContainerDied","Data":"8070d4a5068890764ca9fb284ea8b87416640f8a789499c1f21d4a72060cb874"} Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.851269 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8070d4a5068890764ca9fb284ea8b87416640f8a789499c1f21d4a72060cb874" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.851283 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fkrm8" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.881513 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "71a9b71f-4dfc-49da-9953-2d1739ff480a" (UID: "71a9b71f-4dfc-49da-9953-2d1739ff480a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.903486 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71a9b71f-4dfc-49da-9953-2d1739ff480a" (UID: "71a9b71f-4dfc-49da-9953-2d1739ff480a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.934746 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfjl2\" (UniqueName: \"kubernetes.io/projected/71a9b71f-4dfc-49da-9953-2d1739ff480a-kube-api-access-pfjl2\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.934814 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.934830 4857 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71a9b71f-4dfc-49da-9953-2d1739ff480a-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.968091 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv"] Mar 18 14:44:39 crc kubenswrapper[4857]: E0318 14:44:39.968974 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a9b71f-4dfc-49da-9953-2d1739ff480a" containerName="ssh-known-hosts-edpm-deployment" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.968996 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a9b71f-4dfc-49da-9953-2d1739ff480a" containerName="ssh-known-hosts-edpm-deployment" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.969439 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a9b71f-4dfc-49da-9953-2d1739ff480a" containerName="ssh-known-hosts-edpm-deployment" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.970704 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:39 crc kubenswrapper[4857]: I0318 14:44:39.990878 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv"] Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.140930 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.141012 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.141521 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmqgb\" (UniqueName: \"kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.245031 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqgb\" (UniqueName: \"kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: 
\"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.245178 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.245213 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.248832 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.249068 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.264248 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqgb\" 
(UniqueName: \"kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bggjv\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.327631 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:40 crc kubenswrapper[4857]: I0318 14:44:40.969183 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv"] Mar 18 14:44:40 crc kubenswrapper[4857]: W0318 14:44:40.975938 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e3a693_0c83_4f91_94c2_9ea976d3cf90.slice/crio-65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1 WatchSource:0}: Error finding container 65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1: Status 404 returned error can't find the container with id 65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1 Mar 18 14:44:41 crc kubenswrapper[4857]: I0318 14:44:41.876377 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" event={"ID":"24e3a693-0c83-4f91-94c2-9ea976d3cf90","Type":"ContainerStarted","Data":"65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1"} Mar 18 14:44:45 crc kubenswrapper[4857]: I0318 14:44:45.448592 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" event={"ID":"24e3a693-0c83-4f91-94c2-9ea976d3cf90","Type":"ContainerStarted","Data":"85860efd8419d16546206a3d5f5859ce8513aab8d9e981c26621fcad64eea9bb"} Mar 18 14:44:45 crc kubenswrapper[4857]: I0318 14:44:45.472569 4857 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" podStartSLOduration=3.709415901 podStartE2EDuration="6.472547877s" podCreationTimestamp="2026-03-18 14:44:39 +0000 UTC" firstStartedPulling="2026-03-18 14:44:40.981351453 +0000 UTC m=+2665.110479910" lastFinishedPulling="2026-03-18 14:44:43.744483399 +0000 UTC m=+2667.873611886" observedRunningTime="2026-03-18 14:44:45.467980512 +0000 UTC m=+2669.597108969" watchObservedRunningTime="2026-03-18 14:44:45.472547877 +0000 UTC m=+2669.601676334" Mar 18 14:44:53 crc kubenswrapper[4857]: I0318 14:44:53.691221 4857 generic.go:334] "Generic (PLEG): container finished" podID="24e3a693-0c83-4f91-94c2-9ea976d3cf90" containerID="85860efd8419d16546206a3d5f5859ce8513aab8d9e981c26621fcad64eea9bb" exitCode=0 Mar 18 14:44:53 crc kubenswrapper[4857]: I0318 14:44:53.691314 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" event={"ID":"24e3a693-0c83-4f91-94c2-9ea976d3cf90","Type":"ContainerDied","Data":"85860efd8419d16546206a3d5f5859ce8513aab8d9e981c26621fcad64eea9bb"} Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.487513 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.607620 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory\") pod \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.607669 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmqgb\" (UniqueName: \"kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb\") pod \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.607963 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam\") pod \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\" (UID: \"24e3a693-0c83-4f91-94c2-9ea976d3cf90\") " Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.617341 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb" (OuterVolumeSpecName: "kube-api-access-dmqgb") pod "24e3a693-0c83-4f91-94c2-9ea976d3cf90" (UID: "24e3a693-0c83-4f91-94c2-9ea976d3cf90"). InnerVolumeSpecName "kube-api-access-dmqgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.649662 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24e3a693-0c83-4f91-94c2-9ea976d3cf90" (UID: "24e3a693-0c83-4f91-94c2-9ea976d3cf90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.663317 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory" (OuterVolumeSpecName: "inventory") pod "24e3a693-0c83-4f91-94c2-9ea976d3cf90" (UID: "24e3a693-0c83-4f91-94c2-9ea976d3cf90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.711512 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.711548 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24e3a693-0c83-4f91-94c2-9ea976d3cf90-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.711562 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmqgb\" (UniqueName: \"kubernetes.io/projected/24e3a693-0c83-4f91-94c2-9ea976d3cf90-kube-api-access-dmqgb\") on node \"crc\" DevicePath \"\"" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.722993 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" 
event={"ID":"24e3a693-0c83-4f91-94c2-9ea976d3cf90","Type":"ContainerDied","Data":"65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1"} Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.723036 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65cb26de631300770889431fa16b77c217a8a598156ac2d678e330921d86eac1" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.723067 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bggjv" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.807448 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7"] Mar 18 14:44:55 crc kubenswrapper[4857]: E0318 14:44:55.808202 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e3a693-0c83-4f91-94c2-9ea976d3cf90" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.808228 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e3a693-0c83-4f91-94c2-9ea976d3cf90" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.808518 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e3a693-0c83-4f91-94c2-9ea976d3cf90" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.809767 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.818898 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.819110 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.819606 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.821351 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.834611 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7"] Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.931334 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.931713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64p9\" (UniqueName: \"kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:55 crc kubenswrapper[4857]: I0318 14:44:55.932032 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.035231 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.036151 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64p9\" (UniqueName: \"kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.036411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.040438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.043864 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.058139 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f64p9\" (UniqueName: \"kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:56 crc kubenswrapper[4857]: I0318 14:44:56.458100 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:44:57 crc kubenswrapper[4857]: I0318 14:44:57.020835 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7"] Mar 18 14:44:57 crc kubenswrapper[4857]: I0318 14:44:57.748221 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" event={"ID":"c8a69a18-5407-48c4-bbc6-d60a5824e8db","Type":"ContainerStarted","Data":"dfb53ee5998c105b61b90ad766ada4a6e3469442fcd63959ca160ff110764acc"} Mar 18 14:44:58 crc kubenswrapper[4857]: I0318 14:44:58.759795 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" event={"ID":"c8a69a18-5407-48c4-bbc6-d60a5824e8db","Type":"ContainerStarted","Data":"61d41cd1ad5b8c08dccaddd5c6e2b8c24997e3a82f8da332373145f0792ca239"} Mar 18 14:44:58 crc kubenswrapper[4857]: I0318 14:44:58.784525 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" podStartSLOduration=3.279989132 podStartE2EDuration="3.784492663s" podCreationTimestamp="2026-03-18 14:44:55 +0000 UTC" firstStartedPulling="2026-03-18 14:44:57.029515689 +0000 UTC m=+2681.158644146" lastFinishedPulling="2026-03-18 14:44:57.53401916 +0000 UTC m=+2681.663147677" observedRunningTime="2026-03-18 14:44:58.779054756 +0000 UTC m=+2682.908183223" watchObservedRunningTime="2026-03-18 14:44:58.784492663 +0000 UTC m=+2682.913621130" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.145571 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg"] Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.148668 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.153498 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.154400 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.167584 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg"] Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.213595 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.214135 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvp4\" (UniqueName: \"kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.214331 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.316071 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.316218 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrvp4\" (UniqueName: \"kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.316327 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.317207 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.322932 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.334302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrvp4\" (UniqueName: \"kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4\") pod \"collect-profiles-29564085-knzbg\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:00 crc kubenswrapper[4857]: I0318 14:45:00.477669 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:01 crc kubenswrapper[4857]: I0318 14:45:01.192882 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg"] Mar 18 14:45:01 crc kubenswrapper[4857]: I0318 14:45:01.885046 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" event={"ID":"902750ed-a1ec-4bc5-a25b-de87bab4b407","Type":"ContainerStarted","Data":"29bbbae86d50334373071fe5c9c5865d9d59a37fa75d8574e2e48bb1feac8399"} Mar 18 14:45:01 crc kubenswrapper[4857]: I0318 14:45:01.885271 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" event={"ID":"902750ed-a1ec-4bc5-a25b-de87bab4b407","Type":"ContainerStarted","Data":"5ebcea1550665a6e94996ecab8c83338424f3225531663b1eb8bbd7edcf360b7"} Mar 18 14:45:01 crc kubenswrapper[4857]: I0318 14:45:01.915412 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" 
podStartSLOduration=1.915378773 podStartE2EDuration="1.915378773s" podCreationTimestamp="2026-03-18 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 14:45:01.90770131 +0000 UTC m=+2686.036829767" watchObservedRunningTime="2026-03-18 14:45:01.915378773 +0000 UTC m=+2686.044507240" Mar 18 14:45:02 crc kubenswrapper[4857]: I0318 14:45:02.897377 4857 generic.go:334] "Generic (PLEG): container finished" podID="902750ed-a1ec-4bc5-a25b-de87bab4b407" containerID="29bbbae86d50334373071fe5c9c5865d9d59a37fa75d8574e2e48bb1feac8399" exitCode=0 Mar 18 14:45:02 crc kubenswrapper[4857]: I0318 14:45:02.897495 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" event={"ID":"902750ed-a1ec-4bc5-a25b-de87bab4b407","Type":"ContainerDied","Data":"29bbbae86d50334373071fe5c9c5865d9d59a37fa75d8574e2e48bb1feac8399"} Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.392187 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.538333 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume\") pod \"902750ed-a1ec-4bc5-a25b-de87bab4b407\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.538646 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume\") pod \"902750ed-a1ec-4bc5-a25b-de87bab4b407\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.538717 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrvp4\" (UniqueName: \"kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4\") pod \"902750ed-a1ec-4bc5-a25b-de87bab4b407\" (UID: \"902750ed-a1ec-4bc5-a25b-de87bab4b407\") " Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.539419 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume" (OuterVolumeSpecName: "config-volume") pod "902750ed-a1ec-4bc5-a25b-de87bab4b407" (UID: "902750ed-a1ec-4bc5-a25b-de87bab4b407"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.544541 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "902750ed-a1ec-4bc5-a25b-de87bab4b407" (UID: "902750ed-a1ec-4bc5-a25b-de87bab4b407"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.545051 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4" (OuterVolumeSpecName: "kube-api-access-xrvp4") pod "902750ed-a1ec-4bc5-a25b-de87bab4b407" (UID: "902750ed-a1ec-4bc5-a25b-de87bab4b407"). InnerVolumeSpecName "kube-api-access-xrvp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.642217 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/902750ed-a1ec-4bc5-a25b-de87bab4b407-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.642271 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/902750ed-a1ec-4bc5-a25b-de87bab4b407-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.642286 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrvp4\" (UniqueName: \"kubernetes.io/projected/902750ed-a1ec-4bc5-a25b-de87bab4b407-kube-api-access-xrvp4\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.938796 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" event={"ID":"902750ed-a1ec-4bc5-a25b-de87bab4b407","Type":"ContainerDied","Data":"5ebcea1550665a6e94996ecab8c83338424f3225531663b1eb8bbd7edcf360b7"} Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.938841 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebcea1550665a6e94996ecab8c83338424f3225531663b1eb8bbd7edcf360b7" Mar 18 14:45:04 crc kubenswrapper[4857]: I0318 14:45:04.938906 4857 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg" Mar 18 14:45:05 crc kubenswrapper[4857]: I0318 14:45:05.492268 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t"] Mar 18 14:45:05 crc kubenswrapper[4857]: I0318 14:45:05.504822 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564040-b8w4t"] Mar 18 14:45:07 crc kubenswrapper[4857]: I0318 14:45:07.189946 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d067c327-e7cb-4fbc-a54f-4ac7bd9c7825" path="/var/lib/kubelet/pods/d067c327-e7cb-4fbc-a54f-4ac7bd9c7825/volumes" Mar 18 14:45:07 crc kubenswrapper[4857]: I0318 14:45:07.983847 4857 generic.go:334] "Generic (PLEG): container finished" podID="c8a69a18-5407-48c4-bbc6-d60a5824e8db" containerID="61d41cd1ad5b8c08dccaddd5c6e2b8c24997e3a82f8da332373145f0792ca239" exitCode=0 Mar 18 14:45:07 crc kubenswrapper[4857]: I0318 14:45:07.983930 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" event={"ID":"c8a69a18-5407-48c4-bbc6-d60a5824e8db","Type":"ContainerDied","Data":"61d41cd1ad5b8c08dccaddd5c6e2b8c24997e3a82f8da332373145f0792ca239"} Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.733870 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.825714 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam\") pod \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.825921 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f64p9\" (UniqueName: \"kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9\") pod \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.826004 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory\") pod \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\" (UID: \"c8a69a18-5407-48c4-bbc6-d60a5824e8db\") " Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.834226 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9" (OuterVolumeSpecName: "kube-api-access-f64p9") pod "c8a69a18-5407-48c4-bbc6-d60a5824e8db" (UID: "c8a69a18-5407-48c4-bbc6-d60a5824e8db"). InnerVolumeSpecName "kube-api-access-f64p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.863147 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c8a69a18-5407-48c4-bbc6-d60a5824e8db" (UID: "c8a69a18-5407-48c4-bbc6-d60a5824e8db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:45:09 crc kubenswrapper[4857]: I0318 14:45:09.863577 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory" (OuterVolumeSpecName: "inventory") pod "c8a69a18-5407-48c4-bbc6-d60a5824e8db" (UID: "c8a69a18-5407-48c4-bbc6-d60a5824e8db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.008141 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.008207 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f64p9\" (UniqueName: \"kubernetes.io/projected/c8a69a18-5407-48c4-bbc6-d60a5824e8db-kube-api-access-f64p9\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.008221 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8a69a18-5407-48c4-bbc6-d60a5824e8db-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.129349 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" 
event={"ID":"c8a69a18-5407-48c4-bbc6-d60a5824e8db","Type":"ContainerDied","Data":"dfb53ee5998c105b61b90ad766ada4a6e3469442fcd63959ca160ff110764acc"} Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.129417 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb53ee5998c105b61b90ad766ada4a6e3469442fcd63959ca160ff110764acc" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.129477 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.471336 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf"] Mar 18 14:45:10 crc kubenswrapper[4857]: E0318 14:45:10.473498 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902750ed-a1ec-4bc5-a25b-de87bab4b407" containerName="collect-profiles" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.473522 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="902750ed-a1ec-4bc5-a25b-de87bab4b407" containerName="collect-profiles" Mar 18 14:45:10 crc kubenswrapper[4857]: E0318 14:45:10.473539 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a69a18-5407-48c4-bbc6-d60a5824e8db" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.473546 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a69a18-5407-48c4-bbc6-d60a5824e8db" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.473829 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a69a18-5407-48c4-bbc6-d60a5824e8db" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.473855 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="902750ed-a1ec-4bc5-a25b-de87bab4b407" containerName="collect-profiles" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.474851 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.479242 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.479444 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.481304 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.481776 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.481940 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.482053 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.482083 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.482199 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.482406 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:45:10 crc 
kubenswrapper[4857]: I0318 14:45:10.509802 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf"] Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634318 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634376 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634412 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634441 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634808 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634930 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.634973 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635026 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: 
\"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635051 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635109 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635219 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635251 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635286 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635532 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635582 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.635771 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2qr\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738610 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2qr\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738734 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738777 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738801 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: 
\"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738856 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738928 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738954 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.738978 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc 
kubenswrapper[4857]: I0318 14:45:10.739051 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739098 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739167 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739239 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739270 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739317 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739401 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.739440 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.744723 4857 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.745244 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.745790 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.746895 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.748286 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.748575 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.748857 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.748888 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.749288 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.749585 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.752325 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.752595 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.752816 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.753481 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.755445 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.759008 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2qr\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:10 crc kubenswrapper[4857]: I0318 14:45:10.804606 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:45:11 crc kubenswrapper[4857]: I0318 14:45:11.398981 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf"] Mar 18 14:45:11 crc kubenswrapper[4857]: I0318 14:45:11.406840 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:45:12 crc kubenswrapper[4857]: I0318 14:45:12.161119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" event={"ID":"285cebdc-6e07-4290-84bd-37fe6df151e4","Type":"ContainerStarted","Data":"57c0e29472c9ec48d38a4237b2e263770d9b564e97fe2a347a7e22d8ef5627a1"} Mar 18 14:45:13 crc kubenswrapper[4857]: I0318 14:45:13.038331 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-8pl4z"] Mar 18 14:45:13 crc kubenswrapper[4857]: I0318 14:45:13.049945 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-8pl4z"] Mar 18 14:45:13 crc kubenswrapper[4857]: I0318 14:45:13.185913 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8981fabd-a063-4094-8843-f2f8190b1a50" path="/var/lib/kubelet/pods/8981fabd-a063-4094-8843-f2f8190b1a50/volumes" Mar 18 14:45:13 crc kubenswrapper[4857]: I0318 14:45:13.191591 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" event={"ID":"285cebdc-6e07-4290-84bd-37fe6df151e4","Type":"ContainerStarted","Data":"cd7ba89eaab3f5eca900b32e37cd81c189301943edada29c4a107e4683f323f9"} Mar 18 14:45:13 crc kubenswrapper[4857]: I0318 14:45:13.237096 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" podStartSLOduration=2.3955692490000002 podStartE2EDuration="3.23706814s" 
podCreationTimestamp="2026-03-18 14:45:10 +0000 UTC" firstStartedPulling="2026-03-18 14:45:11.406480009 +0000 UTC m=+2695.535608486" lastFinishedPulling="2026-03-18 14:45:12.24797891 +0000 UTC m=+2696.377107377" observedRunningTime="2026-03-18 14:45:13.219490398 +0000 UTC m=+2697.348618865" watchObservedRunningTime="2026-03-18 14:45:13.23706814 +0000 UTC m=+2697.366196607" Mar 18 14:45:14 crc kubenswrapper[4857]: I0318 14:45:14.418856 4857 scope.go:117] "RemoveContainer" containerID="32c278b4a84ea9646703fec60b40298d92c03616423ee8a6e884fcd6ce7b93ac" Mar 18 14:45:14 crc kubenswrapper[4857]: I0318 14:45:14.486876 4857 scope.go:117] "RemoveContainer" containerID="4e3088ed0528fc9d50a08ac061c0a4e2c3cbfa7deb926a3d8e87ceab8021f9ec" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.165966 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564086-kd45d"] Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.173894 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.179839 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.180222 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.187226 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.199824 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564086-kd45d"] Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.303983 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7zz\" (UniqueName: \"kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz\") pod \"auto-csr-approver-29564086-kd45d\" (UID: \"363aabfa-9ff9-4f1f-bed5-05790896082a\") " pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.409378 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7zz\" (UniqueName: \"kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz\") pod \"auto-csr-approver-29564086-kd45d\" (UID: \"363aabfa-9ff9-4f1f-bed5-05790896082a\") " pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.430656 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7zz\" (UniqueName: \"kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz\") pod \"auto-csr-approver-29564086-kd45d\" (UID: \"363aabfa-9ff9-4f1f-bed5-05790896082a\") " 
pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:00 crc kubenswrapper[4857]: I0318 14:46:00.515510 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:01 crc kubenswrapper[4857]: I0318 14:46:01.295851 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564086-kd45d"] Mar 18 14:46:01 crc kubenswrapper[4857]: I0318 14:46:01.721881 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564086-kd45d" event={"ID":"363aabfa-9ff9-4f1f-bed5-05790896082a","Type":"ContainerStarted","Data":"693560d42dcc2dfdd937c3cb655ed4356446900b155123543be4364081fdcd94"} Mar 18 14:46:03 crc kubenswrapper[4857]: I0318 14:46:03.747665 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564086-kd45d" event={"ID":"363aabfa-9ff9-4f1f-bed5-05790896082a","Type":"ContainerStarted","Data":"07d4ba9378bea97ab4cebcc2d0a8190b46f7f1ea28cd72523c23180eb69aeedf"} Mar 18 14:46:03 crc kubenswrapper[4857]: I0318 14:46:03.778170 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564086-kd45d" podStartSLOduration=2.103280526 podStartE2EDuration="3.778121602s" podCreationTimestamp="2026-03-18 14:46:00 +0000 UTC" firstStartedPulling="2026-03-18 14:46:01.319260403 +0000 UTC m=+2745.448388860" lastFinishedPulling="2026-03-18 14:46:02.994101479 +0000 UTC m=+2747.123229936" observedRunningTime="2026-03-18 14:46:03.763340769 +0000 UTC m=+2747.892469236" watchObservedRunningTime="2026-03-18 14:46:03.778121602 +0000 UTC m=+2747.907250079" Mar 18 14:46:04 crc kubenswrapper[4857]: I0318 14:46:04.767006 4857 generic.go:334] "Generic (PLEG): container finished" podID="363aabfa-9ff9-4f1f-bed5-05790896082a" containerID="07d4ba9378bea97ab4cebcc2d0a8190b46f7f1ea28cd72523c23180eb69aeedf" exitCode=0 Mar 18 14:46:04 crc 
kubenswrapper[4857]: I0318 14:46:04.767560 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564086-kd45d" event={"ID":"363aabfa-9ff9-4f1f-bed5-05790896082a","Type":"ContainerDied","Data":"07d4ba9378bea97ab4cebcc2d0a8190b46f7f1ea28cd72523c23180eb69aeedf"} Mar 18 14:46:05 crc kubenswrapper[4857]: I0318 14:46:05.791184 4857 generic.go:334] "Generic (PLEG): container finished" podID="285cebdc-6e07-4290-84bd-37fe6df151e4" containerID="cd7ba89eaab3f5eca900b32e37cd81c189301943edada29c4a107e4683f323f9" exitCode=0 Mar 18 14:46:05 crc kubenswrapper[4857]: I0318 14:46:05.791449 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" event={"ID":"285cebdc-6e07-4290-84bd-37fe6df151e4","Type":"ContainerDied","Data":"cd7ba89eaab3f5eca900b32e37cd81c189301943edada29c4a107e4683f323f9"} Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.245872 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.562387 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7zz\" (UniqueName: \"kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz\") pod \"363aabfa-9ff9-4f1f-bed5-05790896082a\" (UID: \"363aabfa-9ff9-4f1f-bed5-05790896082a\") " Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.581509 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz" (OuterVolumeSpecName: "kube-api-access-dp7zz") pod "363aabfa-9ff9-4f1f-bed5-05790896082a" (UID: "363aabfa-9ff9-4f1f-bed5-05790896082a"). InnerVolumeSpecName "kube-api-access-dp7zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.666177 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7zz\" (UniqueName: \"kubernetes.io/projected/363aabfa-9ff9-4f1f-bed5-05790896082a-kube-api-access-dp7zz\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.807671 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564086-kd45d" event={"ID":"363aabfa-9ff9-4f1f-bed5-05790896082a","Type":"ContainerDied","Data":"693560d42dcc2dfdd937c3cb655ed4356446900b155123543be4364081fdcd94"} Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.807781 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693560d42dcc2dfdd937c3cb655ed4356446900b155123543be4364081fdcd94" Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.807806 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564086-kd45d" Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.865772 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564080-xt28k"] Mar 18 14:46:06 crc kubenswrapper[4857]: I0318 14:46:06.891680 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564080-xt28k"] Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.178560 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f08f18-917e-413d-abfe-6ab1006a460d" path="/var/lib/kubelet/pods/56f08f18-917e-413d-abfe-6ab1006a460d/volumes" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.326054 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488108 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488265 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488298 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488345 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488370 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 
14:46:07.488392 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488418 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488495 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488524 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488558 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 
14:46:07.488599 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488674 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488733 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488776 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488806 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " 
Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.488833 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f2qr\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr\") pod \"285cebdc-6e07-4290-84bd-37fe6df151e4\" (UID: \"285cebdc-6e07-4290-84bd-37fe6df151e4\") " Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.496276 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.496407 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr" (OuterVolumeSpecName: "kube-api-access-2f2qr") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "kube-api-access-2f2qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.496412 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.497324 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.499034 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.500717 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.502939 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.502950 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.503429 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.504120 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.504742 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). 
InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.505591 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.506615 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.513661 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.534948 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory" (OuterVolumeSpecName: "inventory") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.536133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "285cebdc-6e07-4290-84bd-37fe6df151e4" (UID: "285cebdc-6e07-4290-84bd-37fe6df151e4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592663 4857 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592713 4857 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592732 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592776 4857 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592793 4857 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592808 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592823 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f2qr\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-kube-api-access-2f2qr\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592839 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592851 4857 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592862 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc 
kubenswrapper[4857]: I0318 14:46:07.592873 4857 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592885 4857 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592899 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592912 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592929 4857 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/285cebdc-6e07-4290-84bd-37fe6df151e4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.592941 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/285cebdc-6e07-4290-84bd-37fe6df151e4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.830322 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" event={"ID":"285cebdc-6e07-4290-84bd-37fe6df151e4","Type":"ContainerDied","Data":"57c0e29472c9ec48d38a4237b2e263770d9b564e97fe2a347a7e22d8ef5627a1"} Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.830373 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c0e29472c9ec48d38a4237b2e263770d9b564e97fe2a347a7e22d8ef5627a1" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.830455 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.965713 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj"] Mar 18 14:46:07 crc kubenswrapper[4857]: E0318 14:46:07.966643 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363aabfa-9ff9-4f1f-bed5-05790896082a" containerName="oc" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.966679 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="363aabfa-9ff9-4f1f-bed5-05790896082a" containerName="oc" Mar 18 14:46:07 crc kubenswrapper[4857]: E0318 14:46:07.966791 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285cebdc-6e07-4290-84bd-37fe6df151e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.966802 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="285cebdc-6e07-4290-84bd-37fe6df151e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.967120 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="363aabfa-9ff9-4f1f-bed5-05790896082a" containerName="oc" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.967146 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="285cebdc-6e07-4290-84bd-37fe6df151e4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.968385 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.974452 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.974834 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.975087 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.975269 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.976280 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:46:07 crc kubenswrapper[4857]: I0318 14:46:07.987612 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj"] Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.118724 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.119123 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.119209 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.119282 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbtc\" (UniqueName: \"kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.119388 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.221782 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhbtc\" (UniqueName: \"kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: 
\"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.221949 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.222155 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.222202 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.222300 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.223793 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.227236 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.227363 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.227433 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.239451 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhbtc\" (UniqueName: \"kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-847zj\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc 
kubenswrapper[4857]: I0318 14:46:08.289656 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:46:08 crc kubenswrapper[4857]: I0318 14:46:08.886044 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj"] Mar 18 14:46:09 crc kubenswrapper[4857]: I0318 14:46:09.851613 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" event={"ID":"62857fb3-1258-4014-9345-dfd35035f61f","Type":"ContainerStarted","Data":"c90f7ee840c6eb3ef27b64fb5cea342ae1a9064578f5afb4a03642293123caa1"} Mar 18 14:46:11 crc kubenswrapper[4857]: I0318 14:46:11.096705 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" event={"ID":"62857fb3-1258-4014-9345-dfd35035f61f","Type":"ContainerStarted","Data":"201f3a2f3ca4ae90c122ebe3a2e080e6c9ee794263c837fc91270a987a96671f"} Mar 18 14:46:11 crc kubenswrapper[4857]: I0318 14:46:11.127305 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" podStartSLOduration=3.44276762 podStartE2EDuration="4.127270795s" podCreationTimestamp="2026-03-18 14:46:07 +0000 UTC" firstStartedPulling="2026-03-18 14:46:08.904628739 +0000 UTC m=+2753.033757196" lastFinishedPulling="2026-03-18 14:46:09.589131914 +0000 UTC m=+2753.718260371" observedRunningTime="2026-03-18 14:46:11.121856579 +0000 UTC m=+2755.250985046" watchObservedRunningTime="2026-03-18 14:46:11.127270795 +0000 UTC m=+2755.256399252" Mar 18 14:46:14 crc kubenswrapper[4857]: I0318 14:46:14.613976 4857 scope.go:117] "RemoveContainer" containerID="562b96ed7f135d1af671e3c0be443ca94c2239a29fe01b878b00d23df93d5f9e" Mar 18 14:46:57 crc kubenswrapper[4857]: I0318 14:46:57.038935 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:46:57 crc kubenswrapper[4857]: I0318 14:46:57.039556 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:47:15 crc kubenswrapper[4857]: I0318 14:47:15.629564 4857 generic.go:334] "Generic (PLEG): container finished" podID="62857fb3-1258-4014-9345-dfd35035f61f" containerID="201f3a2f3ca4ae90c122ebe3a2e080e6c9ee794263c837fc91270a987a96671f" exitCode=0 Mar 18 14:47:15 crc kubenswrapper[4857]: I0318 14:47:15.629653 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" event={"ID":"62857fb3-1258-4014-9345-dfd35035f61f","Type":"ContainerDied","Data":"201f3a2f3ca4ae90c122ebe3a2e080e6c9ee794263c837fc91270a987a96671f"} Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.151243 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.296477 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0\") pod \"62857fb3-1258-4014-9345-dfd35035f61f\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.297508 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam\") pod \"62857fb3-1258-4014-9345-dfd35035f61f\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.297704 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle\") pod \"62857fb3-1258-4014-9345-dfd35035f61f\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.297909 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhbtc\" (UniqueName: \"kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc\") pod \"62857fb3-1258-4014-9345-dfd35035f61f\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.298026 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory\") pod \"62857fb3-1258-4014-9345-dfd35035f61f\" (UID: \"62857fb3-1258-4014-9345-dfd35035f61f\") " Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.318670 4857 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "62857fb3-1258-4014-9345-dfd35035f61f" (UID: "62857fb3-1258-4014-9345-dfd35035f61f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.355113 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc" (OuterVolumeSpecName: "kube-api-access-bhbtc") pod "62857fb3-1258-4014-9345-dfd35035f61f" (UID: "62857fb3-1258-4014-9345-dfd35035f61f"). InnerVolumeSpecName "kube-api-access-bhbtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.355597 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62857fb3-1258-4014-9345-dfd35035f61f" (UID: "62857fb3-1258-4014-9345-dfd35035f61f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.367834 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "62857fb3-1258-4014-9345-dfd35035f61f" (UID: "62857fb3-1258-4014-9345-dfd35035f61f"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.402456 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory" (OuterVolumeSpecName: "inventory") pod "62857fb3-1258-4014-9345-dfd35035f61f" (UID: "62857fb3-1258-4014-9345-dfd35035f61f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.405836 4857 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/62857fb3-1258-4014-9345-dfd35035f61f-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.405874 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.405888 4857 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.405900 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhbtc\" (UniqueName: \"kubernetes.io/projected/62857fb3-1258-4014-9345-dfd35035f61f-kube-api-access-bhbtc\") on node \"crc\" DevicePath \"\"" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.405912 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62857fb3-1258-4014-9345-dfd35035f61f-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.657854 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" event={"ID":"62857fb3-1258-4014-9345-dfd35035f61f","Type":"ContainerDied","Data":"c90f7ee840c6eb3ef27b64fb5cea342ae1a9064578f5afb4a03642293123caa1"} Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.658317 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90f7ee840c6eb3ef27b64fb5cea342ae1a9064578f5afb4a03642293123caa1" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.657956 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-847zj" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.777618 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc"] Mar 18 14:47:17 crc kubenswrapper[4857]: E0318 14:47:17.778268 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62857fb3-1258-4014-9345-dfd35035f61f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.778294 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="62857fb3-1258-4014-9345-dfd35035f61f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.778601 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="62857fb3-1258-4014-9345-dfd35035f61f" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.779852 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.783036 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.783106 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.783159 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.784147 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.784644 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.784976 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.792185 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc"] Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.936382 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.937163 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.937215 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.937912 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.937961 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmjt\" (UniqueName: \"kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:17 crc kubenswrapper[4857]: I0318 14:47:17.938014 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.043560 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.043633 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmjt\" (UniqueName: \"kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.043715 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.043888 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.044048 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.044096 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.050073 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.050098 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.050574 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.068844 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.073466 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.085133 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmjt\" (UniqueName: \"kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.147423 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:47:18 crc kubenswrapper[4857]: I0318 14:47:18.767777 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc"] Mar 18 14:47:19 crc kubenswrapper[4857]: I0318 14:47:19.717907 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" event={"ID":"ed495323-60c5-4ea1-8990-0d4c3910b7ac","Type":"ContainerStarted","Data":"5ee057ac0728f78b4e5f203274c940881d349fa9d791d1f94397c2e701f6a820"} Mar 18 14:47:20 crc kubenswrapper[4857]: I0318 14:47:20.732843 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" event={"ID":"ed495323-60c5-4ea1-8990-0d4c3910b7ac","Type":"ContainerStarted","Data":"9cffe8fe85a07351ade5c5b9be0c51a7be168840d03f3454e6481e5e68524800"} Mar 18 14:47:20 crc kubenswrapper[4857]: I0318 14:47:20.763935 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" podStartSLOduration=3.069999188 podStartE2EDuration="3.763897681s" podCreationTimestamp="2026-03-18 14:47:17 +0000 UTC" firstStartedPulling="2026-03-18 14:47:18.780772616 +0000 UTC m=+2822.909901073" lastFinishedPulling="2026-03-18 14:47:19.474671069 +0000 UTC m=+2823.603799566" observedRunningTime="2026-03-18 14:47:20.757484019 +0000 UTC m=+2824.886612486" watchObservedRunningTime="2026-03-18 14:47:20.763897681 +0000 UTC m=+2824.893026168" Mar 18 14:47:27 crc kubenswrapper[4857]: I0318 
14:47:27.038641 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:47:27 crc kubenswrapper[4857]: I0318 14:47:27.039237 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.111167 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.115150 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.128209 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.226533 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.226692 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfrd\" (UniqueName: \"kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd\") pod \"redhat-operators-sxmw7\" (UID: 
\"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.226984 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.329807 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcfrd\" (UniqueName: \"kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.330323 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.330489 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.330943 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " 
pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.330970 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.354859 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcfrd\" (UniqueName: \"kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd\") pod \"redhat-operators-sxmw7\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:34 crc kubenswrapper[4857]: I0318 14:47:34.452131 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:35 crc kubenswrapper[4857]: W0318 14:47:35.021381 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b WatchSource:0}: Error finding container bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b: Status 404 returned error can't find the container with id bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b Mar 18 14:47:35 crc kubenswrapper[4857]: I0318 14:47:35.031526 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:47:35 crc kubenswrapper[4857]: I0318 14:47:35.912783 4857 generic.go:334] "Generic (PLEG): container finished" podID="735e535a-bce1-491b-812a-944c2232d8bf" containerID="0703ad3665df2dbeb2ed979dfe1037c8853fcdcb88c42059e54f4c9a1de04b4c" exitCode=0 Mar 18 14:47:35 
crc kubenswrapper[4857]: I0318 14:47:35.912880 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerDied","Data":"0703ad3665df2dbeb2ed979dfe1037c8853fcdcb88c42059e54f4c9a1de04b4c"} Mar 18 14:47:35 crc kubenswrapper[4857]: I0318 14:47:35.913144 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerStarted","Data":"bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b"} Mar 18 14:47:37 crc kubenswrapper[4857]: I0318 14:47:37.939685 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerStarted","Data":"0759f6761c278fcc7ca669ad05fe18dc1167ab4980891414447668b7558a49c2"} Mar 18 14:47:46 crc kubenswrapper[4857]: I0318 14:47:46.042853 4857 generic.go:334] "Generic (PLEG): container finished" podID="735e535a-bce1-491b-812a-944c2232d8bf" containerID="0759f6761c278fcc7ca669ad05fe18dc1167ab4980891414447668b7558a49c2" exitCode=0 Mar 18 14:47:46 crc kubenswrapper[4857]: I0318 14:47:46.042940 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerDied","Data":"0759f6761c278fcc7ca669ad05fe18dc1167ab4980891414447668b7558a49c2"} Mar 18 14:47:47 crc kubenswrapper[4857]: I0318 14:47:47.057770 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerStarted","Data":"a7f87a3d466f935136565358361d9239bcb1f9a606c0621984a368e08f5d061d"} Mar 18 14:47:47 crc kubenswrapper[4857]: I0318 14:47:47.079704 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-sxmw7" podStartSLOduration=2.5173383400000002 podStartE2EDuration="13.079671045s" podCreationTimestamp="2026-03-18 14:47:34 +0000 UTC" firstStartedPulling="2026-03-18 14:47:35.929510991 +0000 UTC m=+2840.058639448" lastFinishedPulling="2026-03-18 14:47:46.491843686 +0000 UTC m=+2850.620972153" observedRunningTime="2026-03-18 14:47:47.074800523 +0000 UTC m=+2851.203928980" watchObservedRunningTime="2026-03-18 14:47:47.079671045 +0000 UTC m=+2851.208799512" Mar 18 14:47:54 crc kubenswrapper[4857]: I0318 14:47:54.452586 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:54 crc kubenswrapper[4857]: I0318 14:47:54.453195 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:47:55 crc kubenswrapper[4857]: I0318 14:47:55.512985 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxmw7" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" probeResult="failure" output=< Mar 18 14:47:55 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:47:55 crc kubenswrapper[4857]: > Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.091303 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.091572 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.091651 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.108952 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.109067 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed" gracePeriod=600 Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.341790 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed" exitCode=0 Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.342100 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed"} Mar 18 14:47:57 crc kubenswrapper[4857]: I0318 14:47:57.342684 4857 scope.go:117] "RemoveContainer" containerID="12743c39c2d2cf9bdaafcbb29882466d763b514f536153c4d6236f669f579ff9" Mar 18 14:47:58 crc kubenswrapper[4857]: I0318 14:47:58.418789 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df"} Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.161083 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564088-xfswd"] Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.163586 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.170889 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.171615 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.171912 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.178496 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564088-xfswd"] Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.337677 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlh2p\" (UniqueName: \"kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p\") pod \"auto-csr-approver-29564088-xfswd\" (UID: \"d136ecf4-b974-463f-acd4-bda38ec47748\") " pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.442027 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlh2p\" (UniqueName: \"kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p\") pod 
\"auto-csr-approver-29564088-xfswd\" (UID: \"d136ecf4-b974-463f-acd4-bda38ec47748\") " pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.471588 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlh2p\" (UniqueName: \"kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p\") pod \"auto-csr-approver-29564088-xfswd\" (UID: \"d136ecf4-b974-463f-acd4-bda38ec47748\") " pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:00 crc kubenswrapper[4857]: I0318 14:48:00.504210 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:01 crc kubenswrapper[4857]: I0318 14:48:01.538108 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564088-xfswd"] Mar 18 14:48:01 crc kubenswrapper[4857]: W0318 14:48:01.548558 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd136ecf4_b974_463f_acd4_bda38ec47748.slice/crio-8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b WatchSource:0}: Error finding container 8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b: Status 404 returned error can't find the container with id 8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b Mar 18 14:48:02 crc kubenswrapper[4857]: I0318 14:48:02.469780 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564088-xfswd" event={"ID":"d136ecf4-b974-463f-acd4-bda38ec47748","Type":"ContainerStarted","Data":"8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b"} Mar 18 14:48:03 crc kubenswrapper[4857]: I0318 14:48:03.486110 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564088-xfswd" 
event={"ID":"d136ecf4-b974-463f-acd4-bda38ec47748","Type":"ContainerStarted","Data":"52eb4a34aad31b4e0902f5ac8ac0ef6b5c5eecb5fabf091e392b9eb8ea2b24e3"} Mar 18 14:48:03 crc kubenswrapper[4857]: I0318 14:48:03.514886 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564088-xfswd" podStartSLOduration=2.507775101 podStartE2EDuration="3.514859323s" podCreationTimestamp="2026-03-18 14:48:00 +0000 UTC" firstStartedPulling="2026-03-18 14:48:01.553439987 +0000 UTC m=+2865.682568444" lastFinishedPulling="2026-03-18 14:48:02.560524209 +0000 UTC m=+2866.689652666" observedRunningTime="2026-03-18 14:48:03.501667471 +0000 UTC m=+2867.630795928" watchObservedRunningTime="2026-03-18 14:48:03.514859323 +0000 UTC m=+2867.643987940" Mar 18 14:48:04 crc kubenswrapper[4857]: I0318 14:48:04.544291 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564088-xfswd" event={"ID":"d136ecf4-b974-463f-acd4-bda38ec47748","Type":"ContainerDied","Data":"52eb4a34aad31b4e0902f5ac8ac0ef6b5c5eecb5fabf091e392b9eb8ea2b24e3"} Mar 18 14:48:04 crc kubenswrapper[4857]: I0318 14:48:04.544116 4857 generic.go:334] "Generic (PLEG): container finished" podID="d136ecf4-b974-463f-acd4-bda38ec47748" containerID="52eb4a34aad31b4e0902f5ac8ac0ef6b5c5eecb5fabf091e392b9eb8ea2b24e3" exitCode=0 Mar 18 14:48:05 crc kubenswrapper[4857]: I0318 14:48:05.557928 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sxmw7" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" probeResult="failure" output=< Mar 18 14:48:05 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:48:05 crc kubenswrapper[4857]: > Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.311002 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.426631 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlh2p\" (UniqueName: \"kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p\") pod \"d136ecf4-b974-463f-acd4-bda38ec47748\" (UID: \"d136ecf4-b974-463f-acd4-bda38ec47748\") " Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.438067 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p" (OuterVolumeSpecName: "kube-api-access-wlh2p") pod "d136ecf4-b974-463f-acd4-bda38ec47748" (UID: "d136ecf4-b974-463f-acd4-bda38ec47748"). InnerVolumeSpecName "kube-api-access-wlh2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.529957 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlh2p\" (UniqueName: \"kubernetes.io/projected/d136ecf4-b974-463f-acd4-bda38ec47748-kube-api-access-wlh2p\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.585271 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564088-xfswd" event={"ID":"d136ecf4-b974-463f-acd4-bda38ec47748","Type":"ContainerDied","Data":"8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b"} Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.585333 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8172a64ab214b6eede5b884211f4ee952874c35863fd30b2fd04b89b7401230b" Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.585369 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564088-xfswd" Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.593001 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564082-52c5z"] Mar 18 14:48:06 crc kubenswrapper[4857]: I0318 14:48:06.605360 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564082-52c5z"] Mar 18 14:48:07 crc kubenswrapper[4857]: I0318 14:48:07.190307 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac8ea03-7f51-4f69-ba7d-c4cf41769b48" path="/var/lib/kubelet/pods/5ac8ea03-7f51-4f69-ba7d-c4cf41769b48/volumes" Mar 18 14:48:13 crc kubenswrapper[4857]: I0318 14:48:13.671828 4857 generic.go:334] "Generic (PLEG): container finished" podID="ed495323-60c5-4ea1-8990-0d4c3910b7ac" containerID="9cffe8fe85a07351ade5c5b9be0c51a7be168840d03f3454e6481e5e68524800" exitCode=0 Mar 18 14:48:13 crc kubenswrapper[4857]: I0318 14:48:13.671943 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" event={"ID":"ed495323-60c5-4ea1-8990-0d4c3910b7ac","Type":"ContainerDied","Data":"9cffe8fe85a07351ade5c5b9be0c51a7be168840d03f3454e6481e5e68524800"} Mar 18 14:48:14 crc kubenswrapper[4857]: I0318 14:48:14.515287 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:48:14 crc kubenswrapper[4857]: I0318 14:48:14.574442 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:48:14 crc kubenswrapper[4857]: I0318 14:48:14.766505 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:48:14 crc kubenswrapper[4857]: I0318 14:48:14.782639 4857 scope.go:117] "RemoveContainer" 
containerID="6864d20b874427a334b87984c21b401cc9f1609c6e00009c962fa3f26bf65a02" Mar 18 14:48:15 crc kubenswrapper[4857]: I0318 14:48:15.711651 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sxmw7" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" containerID="cri-o://a7f87a3d466f935136565358361d9239bcb1f9a606c0621984a368e08f5d061d" gracePeriod=2 Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.567853 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.676271 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.676626 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.676789 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.676862 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.676928 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwmjt\" (UniqueName: \"kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.677001 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0\") pod \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\" (UID: \"ed495323-60c5-4ea1-8990-0d4c3910b7ac\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.682294 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.682860 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt" (OuterVolumeSpecName: "kube-api-access-wwmjt") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "kube-api-access-wwmjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.714517 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.717319 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.719115 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory" (OuterVolumeSpecName: "inventory") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.722117 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "ed495323-60c5-4ea1-8990-0d4c3910b7ac" (UID: "ed495323-60c5-4ea1-8990-0d4c3910b7ac"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.724817 4857 generic.go:334] "Generic (PLEG): container finished" podID="735e535a-bce1-491b-812a-944c2232d8bf" containerID="a7f87a3d466f935136565358361d9239bcb1f9a606c0621984a368e08f5d061d" exitCode=0 Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.724883 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerDied","Data":"a7f87a3d466f935136565358361d9239bcb1f9a606c0621984a368e08f5d061d"} Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.724914 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sxmw7" event={"ID":"735e535a-bce1-491b-812a-944c2232d8bf","Type":"ContainerDied","Data":"bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b"} Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.724924 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.726622 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" event={"ID":"ed495323-60c5-4ea1-8990-0d4c3910b7ac","Type":"ContainerDied","Data":"5ee057ac0728f78b4e5f203274c940881d349fa9d791d1f94397c2e701f6a820"} Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.726646 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee057ac0728f78b4e5f203274c940881d349fa9d791d1f94397c2e701f6a820" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.726726 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780723 4857 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780770 4857 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780789 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwmjt\" (UniqueName: \"kubernetes.io/projected/ed495323-60c5-4ea1-8990-0d4c3910b7ac-kube-api-access-wwmjt\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780799 4857 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780810 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.780819 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed495323-60c5-4ea1-8990-0d4c3910b7ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.790712 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.882781 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content\") pod \"735e535a-bce1-491b-812a-944c2232d8bf\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.882875 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcfrd\" (UniqueName: \"kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd\") pod \"735e535a-bce1-491b-812a-944c2232d8bf\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.882921 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities\") pod \"735e535a-bce1-491b-812a-944c2232d8bf\" (UID: \"735e535a-bce1-491b-812a-944c2232d8bf\") " Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.883953 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities" (OuterVolumeSpecName: "utilities") pod "735e535a-bce1-491b-812a-944c2232d8bf" (UID: "735e535a-bce1-491b-812a-944c2232d8bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.884630 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.888324 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd" (OuterVolumeSpecName: "kube-api-access-mcfrd") pod "735e535a-bce1-491b-812a-944c2232d8bf" (UID: "735e535a-bce1-491b-812a-944c2232d8bf"). InnerVolumeSpecName "kube-api-access-mcfrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:48:16 crc kubenswrapper[4857]: I0318 14:48:16.987284 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcfrd\" (UniqueName: \"kubernetes.io/projected/735e535a-bce1-491b-812a-944c2232d8bf-kube-api-access-mcfrd\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.048945 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "735e535a-bce1-491b-812a-944c2232d8bf" (UID: "735e535a-bce1-491b-812a-944c2232d8bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.091947 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735e535a-bce1-491b-812a-944c2232d8bf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.738657 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sxmw7" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.777991 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9"] Mar 18 14:48:17 crc kubenswrapper[4857]: E0318 14:48:17.778729 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="extract-content" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.778761 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="extract-content" Mar 18 14:48:17 crc kubenswrapper[4857]: E0318 14:48:17.778771 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.778778 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" Mar 18 14:48:17 crc kubenswrapper[4857]: E0318 14:48:17.778834 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="extract-utilities" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.778843 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="extract-utilities" Mar 18 14:48:17 crc kubenswrapper[4857]: E0318 14:48:17.778852 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed495323-60c5-4ea1-8990-0d4c3910b7ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.778860 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed495323-60c5-4ea1-8990-0d4c3910b7ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 18 14:48:17 crc kubenswrapper[4857]: E0318 14:48:17.778882 4857 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d136ecf4-b974-463f-acd4-bda38ec47748" containerName="oc" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.778888 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d136ecf4-b974-463f-acd4-bda38ec47748" containerName="oc" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.779163 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d136ecf4-b974-463f-acd4-bda38ec47748" containerName="oc" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.779177 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed495323-60c5-4ea1-8990-0d4c3910b7ac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.779192 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e535a-bce1-491b-812a-944c2232d8bf" containerName="registry-server" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.780285 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.785335 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.785529 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.785584 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.785687 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.785996 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.812934 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.824032 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sxmw7"] Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.840251 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9"] Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.949180 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:17 crc 
kubenswrapper[4857]: I0318 14:48:17.949790 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.949872 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.950088 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:17 crc kubenswrapper[4857]: I0318 14:48:17.950131 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zshk\" (UniqueName: \"kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.052133 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.052203 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.052330 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.052378 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zshk\" (UniqueName: \"kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.052526 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.058455 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.059329 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.061000 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.061434 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.072570 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zshk\" (UniqueName: 
\"kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:18 crc kubenswrapper[4857]: I0318 14:48:18.106088 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:48:19 crc kubenswrapper[4857]: I0318 14:48:19.076216 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9"] Mar 18 14:48:19 crc kubenswrapper[4857]: I0318 14:48:19.177927 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735e535a-bce1-491b-812a-944c2232d8bf" path="/var/lib/kubelet/pods/735e535a-bce1-491b-812a-944c2232d8bf/volumes" Mar 18 14:48:19 crc kubenswrapper[4857]: I0318 14:48:19.945479 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" event={"ID":"cfcf59a9-242d-4953-9276-a0d09a4d3030","Type":"ContainerStarted","Data":"7dfe4f251c1f7f321a6870b5270bb5be788bc67efc8acfefc6e317fb60212dc9"} Mar 18 14:48:20 crc kubenswrapper[4857]: E0318 14:48:20.703645 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:20 crc kubenswrapper[4857]: I0318 14:48:20.993068 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" 
event={"ID":"cfcf59a9-242d-4953-9276-a0d09a4d3030","Type":"ContainerStarted","Data":"fd5ccd6e27e2012c52ac95f918d8d102041d11ed6eb7b7f3234420b25d6c4cd2"} Mar 18 14:48:24 crc kubenswrapper[4857]: E0318 14:48:24.369856 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:34 crc kubenswrapper[4857]: E0318 14:48:34.820966 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:35 crc kubenswrapper[4857]: E0318 14:48:35.333579 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:45 crc kubenswrapper[4857]: E0318 14:48:45.202006 4857 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:48 crc kubenswrapper[4857]: E0318 14:48:48.108185 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:48 crc kubenswrapper[4857]: E0318 14:48:48.109922 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:50 crc kubenswrapper[4857]: E0318 14:48:50.341875 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:48:55 crc kubenswrapper[4857]: E0318 14:48:55.510049 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:49:05 crc kubenswrapper[4857]: E0318 14:49:05.098187 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache]" Mar 18 14:49:05 crc kubenswrapper[4857]: E0318 14:49:05.564339 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:49:15 crc kubenswrapper[4857]: E0318 14:49:15.947665 4857 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice/crio-bea2838ab84868234709e034dccc41af7977ac724505fdc6c9ee24df58fd536b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735e535a_bce1_491b_812a_944c2232d8bf.slice\": RecentStats: unable to find data in memory cache]" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.115237 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" podStartSLOduration=89.488393513 podStartE2EDuration="1m30.114660461s" podCreationTimestamp="2026-03-18 14:48:17 +0000 UTC" firstStartedPulling="2026-03-18 14:48:19.09200653 +0000 UTC m=+2883.221134987" lastFinishedPulling="2026-03-18 14:48:19.718273458 +0000 UTC m=+2883.847401935" observedRunningTime="2026-03-18 14:48:21.041104016 +0000 UTC m=+2885.170232473" watchObservedRunningTime="2026-03-18 14:49:47.114660461 +0000 UTC m=+2971.243788918" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.127949 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-89qls"] Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.136515 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.163484 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89qls"] Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.300617 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-catalog-content\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.300713 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-utilities\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.300772 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b458b\" (UniqueName: \"kubernetes.io/projected/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-kube-api-access-b458b\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.403770 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-catalog-content\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.405033 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-utilities\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.405085 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b458b\" (UniqueName: \"kubernetes.io/projected/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-kube-api-access-b458b\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.404922 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-catalog-content\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.405868 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-utilities\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.429074 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b458b\" (UniqueName: \"kubernetes.io/projected/2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc-kube-api-access-b458b\") pod \"community-operators-89qls\" (UID: \"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc\") " pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:47 crc kubenswrapper[4857]: I0318 14:49:47.481896 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:49:48 crc kubenswrapper[4857]: I0318 14:49:48.048584 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89qls"] Mar 18 14:49:48 crc kubenswrapper[4857]: I0318 14:49:48.132552 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerStarted","Data":"7500054a852b30fa5ae2a129f19796089dee6f3529744400c75141e65779d8da"} Mar 18 14:49:49 crc kubenswrapper[4857]: I0318 14:49:49.145515 4857 generic.go:334] "Generic (PLEG): container finished" podID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerID="fe2aa108da541b3813e749acfa1ebb1ad92314bec5bad885c7c9c2273677866a" exitCode=0 Mar 18 14:49:49 crc kubenswrapper[4857]: I0318 14:49:49.145989 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerDied","Data":"fe2aa108da541b3813e749acfa1ebb1ad92314bec5bad885c7c9c2273677866a"} Mar 18 14:49:57 crc kubenswrapper[4857]: I0318 14:49:57.038815 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:49:57 crc kubenswrapper[4857]: I0318 14:49:57.039380 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:49:59 crc kubenswrapper[4857]: I0318 14:49:59.548494 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerStarted","Data":"2418e10c0596ff9ca91530af89a73fe22de204c9906b6eafd18cf8804f84b2ca"} Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.170932 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564090-9kxjx"] Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.173132 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.176539 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.181282 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.181741 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.187542 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564090-9kxjx"] Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.292862 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2ql\" (UniqueName: \"kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql\") pod \"auto-csr-approver-29564090-9kxjx\" (UID: \"9eb214a3-d887-4e37-b64e-89c873da1282\") " pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.397375 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2ql\" (UniqueName: 
\"kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql\") pod \"auto-csr-approver-29564090-9kxjx\" (UID: \"9eb214a3-d887-4e37-b64e-89c873da1282\") " pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.653390 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2ql\" (UniqueName: \"kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql\") pod \"auto-csr-approver-29564090-9kxjx\" (UID: \"9eb214a3-d887-4e37-b64e-89c873da1282\") " pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.674071 4857 generic.go:334] "Generic (PLEG): container finished" podID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerID="2418e10c0596ff9ca91530af89a73fe22de204c9906b6eafd18cf8804f84b2ca" exitCode=0 Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.674161 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerDied","Data":"2418e10c0596ff9ca91530af89a73fe22de204c9906b6eafd18cf8804f84b2ca"} Mar 18 14:50:00 crc kubenswrapper[4857]: I0318 14:50:00.815633 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:01 crc kubenswrapper[4857]: I0318 14:50:01.340983 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564090-9kxjx"] Mar 18 14:50:01 crc kubenswrapper[4857]: W0318 14:50:01.352282 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb214a3_d887_4e37_b64e_89c873da1282.slice/crio-04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58 WatchSource:0}: Error finding container 04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58: Status 404 returned error can't find the container with id 04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58 Mar 18 14:50:01 crc kubenswrapper[4857]: I0318 14:50:01.697582 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" event={"ID":"9eb214a3-d887-4e37-b64e-89c873da1282","Type":"ContainerStarted","Data":"04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58"} Mar 18 14:50:01 crc kubenswrapper[4857]: I0318 14:50:01.705680 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerStarted","Data":"6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c"} Mar 18 14:50:01 crc kubenswrapper[4857]: I0318 14:50:01.732349 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-89qls" podStartSLOduration=3.564115717 podStartE2EDuration="15.732302932s" podCreationTimestamp="2026-03-18 14:49:46 +0000 UTC" firstStartedPulling="2026-03-18 14:49:49.148290075 +0000 UTC m=+2973.277418532" lastFinishedPulling="2026-03-18 14:50:01.31647729 +0000 UTC m=+2985.445605747" observedRunningTime="2026-03-18 14:50:01.725245704 +0000 UTC m=+2985.854374161" 
watchObservedRunningTime="2026-03-18 14:50:01.732302932 +0000 UTC m=+2985.861431409" Mar 18 14:50:04 crc kubenswrapper[4857]: I0318 14:50:04.763456 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" event={"ID":"9eb214a3-d887-4e37-b64e-89c873da1282","Type":"ContainerStarted","Data":"279b56bd529c369248f67383590111c9a25dd6b7884d93956e575810ff2ab8cd"} Mar 18 14:50:04 crc kubenswrapper[4857]: I0318 14:50:04.792144 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" podStartSLOduration=2.021771072 podStartE2EDuration="4.79212273s" podCreationTimestamp="2026-03-18 14:50:00 +0000 UTC" firstStartedPulling="2026-03-18 14:50:01.359639707 +0000 UTC m=+2985.488768164" lastFinishedPulling="2026-03-18 14:50:04.129991375 +0000 UTC m=+2988.259119822" observedRunningTime="2026-03-18 14:50:04.791119524 +0000 UTC m=+2988.920247981" watchObservedRunningTime="2026-03-18 14:50:04.79212273 +0000 UTC m=+2988.921251187" Mar 18 14:50:05 crc kubenswrapper[4857]: I0318 14:50:05.783114 4857 generic.go:334] "Generic (PLEG): container finished" podID="9eb214a3-d887-4e37-b64e-89c873da1282" containerID="279b56bd529c369248f67383590111c9a25dd6b7884d93956e575810ff2ab8cd" exitCode=0 Mar 18 14:50:05 crc kubenswrapper[4857]: I0318 14:50:05.783476 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" event={"ID":"9eb214a3-d887-4e37-b64e-89c873da1282","Type":"ContainerDied","Data":"279b56bd529c369248f67383590111c9a25dd6b7884d93956e575810ff2ab8cd"} Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.283926 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.482198 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.482263 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m2ql\" (UniqueName: \"kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql\") pod \"9eb214a3-d887-4e37-b64e-89c873da1282\" (UID: \"9eb214a3-d887-4e37-b64e-89c873da1282\") " Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.483851 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.490247 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql" (OuterVolumeSpecName: "kube-api-access-7m2ql") pod "9eb214a3-d887-4e37-b64e-89c873da1282" (UID: "9eb214a3-d887-4e37-b64e-89c873da1282"). InnerVolumeSpecName "kube-api-access-7m2ql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.543295 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.590403 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m2ql\" (UniqueName: \"kubernetes.io/projected/9eb214a3-d887-4e37-b64e-89c873da1282-kube-api-access-7m2ql\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.809223 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" event={"ID":"9eb214a3-d887-4e37-b64e-89c873da1282","Type":"ContainerDied","Data":"04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58"} Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.809300 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ef62a51f618cc99ae8d785a77c1d682a7f451a0578384ab5f035f29b94cb58" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.809242 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564090-9kxjx" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.870687 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-89qls" Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.900341 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564084-27q8j"] Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.914591 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564084-27q8j"] Mar 18 14:50:07 crc kubenswrapper[4857]: I0318 14:50:07.988241 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-89qls"] Mar 18 14:50:08 crc kubenswrapper[4857]: I0318 14:50:08.087720 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:50:08 crc kubenswrapper[4857]: I0318 14:50:08.088171 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z72sl" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" containerID="cri-o://750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" gracePeriod=2 Mar 18 14:50:08 crc kubenswrapper[4857]: E0318 14:50:08.593352 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d is running failed: container process not found" containerID="750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:50:08 crc kubenswrapper[4857]: E0318 14:50:08.593962 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID 
of 750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d is running failed: container process not found" containerID="750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:50:08 crc kubenswrapper[4857]: E0318 14:50:08.595480 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d is running failed: container process not found" containerID="750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 14:50:08 crc kubenswrapper[4857]: E0318 14:50:08.595515 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-z72sl" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" Mar 18 14:50:08 crc kubenswrapper[4857]: E0318 14:50:08.641326 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b6552eb_f07b_40da_90fd_60354bc668d7.slice/crio-conmon-750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b6552eb_f07b_40da_90fd_60354bc668d7.slice/crio-750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d.scope\": RecentStats: unable to find data in memory cache]" Mar 18 14:50:08 crc kubenswrapper[4857]: I0318 14:50:08.925424 4857 generic.go:334] "Generic (PLEG): container finished" podID="0b6552eb-f07b-40da-90fd-60354bc668d7" 
containerID="750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" exitCode=0 Mar 18 14:50:08 crc kubenswrapper[4857]: I0318 14:50:08.926995 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerDied","Data":"750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d"} Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.091375 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.124508 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") pod \"0b6552eb-f07b-40da-90fd-60354bc668d7\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.124607 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities\") pod \"0b6552eb-f07b-40da-90fd-60354bc668d7\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.124734 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqnwf\" (UniqueName: \"kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf\") pod \"0b6552eb-f07b-40da-90fd-60354bc668d7\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.141296 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities" (OuterVolumeSpecName: "utilities") pod "0b6552eb-f07b-40da-90fd-60354bc668d7" (UID: 
"0b6552eb-f07b-40da-90fd-60354bc668d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.153984 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf" (OuterVolumeSpecName: "kube-api-access-qqnwf") pod "0b6552eb-f07b-40da-90fd-60354bc668d7" (UID: "0b6552eb-f07b-40da-90fd-60354bc668d7"). InnerVolumeSpecName "kube-api-access-qqnwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.193532 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e93ceb-db89-4e04-8d42-d598ad3d8579" path="/var/lib/kubelet/pods/b1e93ceb-db89-4e04-8d42-d598ad3d8579/volumes" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.227830 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqnwf\" (UniqueName: \"kubernetes.io/projected/0b6552eb-f07b-40da-90fd-60354bc668d7-kube-api-access-qqnwf\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.227869 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.329978 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b6552eb-f07b-40da-90fd-60354bc668d7" (UID: "0b6552eb-f07b-40da-90fd-60354bc668d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.330380 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") pod \"0b6552eb-f07b-40da-90fd-60354bc668d7\" (UID: \"0b6552eb-f07b-40da-90fd-60354bc668d7\") " Mar 18 14:50:09 crc kubenswrapper[4857]: W0318 14:50:09.330542 4857 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0b6552eb-f07b-40da-90fd-60354bc668d7/volumes/kubernetes.io~empty-dir/catalog-content Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.330569 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b6552eb-f07b-40da-90fd-60354bc668d7" (UID: "0b6552eb-f07b-40da-90fd-60354bc668d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.331468 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6552eb-f07b-40da-90fd-60354bc668d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.939841 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z72sl" Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.939902 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z72sl" event={"ID":"0b6552eb-f07b-40da-90fd-60354bc668d7","Type":"ContainerDied","Data":"26bec054273a1e10b100d2d74ba8f7c495da190ae26ec044582380e5a815b1ee"} Mar 18 14:50:09 crc kubenswrapper[4857]: I0318 14:50:09.939981 4857 scope.go:117] "RemoveContainer" containerID="750c4a75eafcf77388df11715b5540773aecc0a0c609e9d0b9b53b52d60f066d" Mar 18 14:50:10 crc kubenswrapper[4857]: I0318 14:50:09.996558 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:50:10 crc kubenswrapper[4857]: I0318 14:50:10.010298 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z72sl"] Mar 18 14:50:10 crc kubenswrapper[4857]: I0318 14:50:10.196256 4857 scope.go:117] "RemoveContainer" containerID="4f815b6c7ef00ae11dc45cbd88ebbf109b0e47be90a954d90d956658e936e4e2" Mar 18 14:50:10 crc kubenswrapper[4857]: I0318 14:50:10.272937 4857 scope.go:117] "RemoveContainer" containerID="1dcf310c885817f3748da77729c2418279f69b69d5cace618f12a06091a09e76" Mar 18 14:50:11 crc kubenswrapper[4857]: I0318 14:50:11.275055 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" path="/var/lib/kubelet/pods/0b6552eb-f07b-40da-90fd-60354bc668d7/volumes" Mar 18 14:50:15 crc kubenswrapper[4857]: I0318 14:50:15.281131 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:50:15 crc kubenswrapper[4857]: I0318 14:50:15.282073 4857 prober.go:107] "Probe 
failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:50:15 crc kubenswrapper[4857]: I0318 14:50:15.643253 4857 scope.go:117] "RemoveContainer" containerID="303eecf131d7b3cbd1ee56267851f15603fb6a9c3922c3f2dcfdcdd7d7cd1d28" Mar 18 14:50:27 crc kubenswrapper[4857]: I0318 14:50:27.038939 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:50:27 crc kubenswrapper[4857]: I0318 14:50:27.039504 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.830645 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:37 crc kubenswrapper[4857]: E0318 14:50:37.832002 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="extract-content" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.832035 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="extract-content" Mar 18 14:50:37 crc kubenswrapper[4857]: E0318 14:50:37.832083 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="extract-utilities" Mar 18 14:50:37 
crc kubenswrapper[4857]: I0318 14:50:37.832091 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="extract-utilities" Mar 18 14:50:37 crc kubenswrapper[4857]: E0318 14:50:37.832120 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.832126 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" Mar 18 14:50:37 crc kubenswrapper[4857]: E0318 14:50:37.832141 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb214a3-d887-4e37-b64e-89c873da1282" containerName="oc" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.832147 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb214a3-d887-4e37-b64e-89c873da1282" containerName="oc" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.832407 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6552eb-f07b-40da-90fd-60354bc668d7" containerName="registry-server" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.832433 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb214a3-d887-4e37-b64e-89c873da1282" containerName="oc" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.834459 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.842684 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.842839 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.843056 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chq5p\" (UniqueName: \"kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.844182 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.945928 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chq5p\" (UniqueName: \"kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.946211 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.946297 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.946870 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.946911 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:37 crc kubenswrapper[4857]: I0318 14:50:37.977357 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chq5p\" (UniqueName: \"kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p\") pod \"redhat-marketplace-xpbcx\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:38 crc kubenswrapper[4857]: I0318 14:50:38.163287 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:38 crc kubenswrapper[4857]: I0318 14:50:38.772527 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:38 crc kubenswrapper[4857]: W0318 14:50:38.785737 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e1e3f1_9dde_400d_9b1b_33d7e2274c39.slice/crio-8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e WatchSource:0}: Error finding container 8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e: Status 404 returned error can't find the container with id 8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e Mar 18 14:50:38 crc kubenswrapper[4857]: I0318 14:50:38.886926 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerStarted","Data":"8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e"} Mar 18 14:50:39 crc kubenswrapper[4857]: I0318 14:50:39.898977 4857 generic.go:334] "Generic (PLEG): container finished" podID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerID="0d568e5c008c2b8127739b6859995e60051bdce1a3fd293eaa67da5e57a5db79" exitCode=0 Mar 18 14:50:39 crc kubenswrapper[4857]: I0318 14:50:39.899096 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerDied","Data":"0d568e5c008c2b8127739b6859995e60051bdce1a3fd293eaa67da5e57a5db79"} Mar 18 14:50:39 crc kubenswrapper[4857]: I0318 14:50:39.901652 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:50:40 crc kubenswrapper[4857]: I0318 14:50:40.917288 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerStarted","Data":"f673fd50757082bdebb0b1888f8aafdabee7198545aff4c3058e684b9c008cc2"} Mar 18 14:50:41 crc kubenswrapper[4857]: I0318 14:50:41.930904 4857 generic.go:334] "Generic (PLEG): container finished" podID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerID="f673fd50757082bdebb0b1888f8aafdabee7198545aff4c3058e684b9c008cc2" exitCode=0 Mar 18 14:50:41 crc kubenswrapper[4857]: I0318 14:50:41.930965 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerDied","Data":"f673fd50757082bdebb0b1888f8aafdabee7198545aff4c3058e684b9c008cc2"} Mar 18 14:50:42 crc kubenswrapper[4857]: I0318 14:50:42.947262 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerStarted","Data":"b4d07ea368ecd1f2c83ce1370eb7655532d995cf8c006ab78a7c79d714359435"} Mar 18 14:50:42 crc kubenswrapper[4857]: I0318 14:50:42.985226 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xpbcx" podStartSLOduration=3.268078244 podStartE2EDuration="5.985199892s" podCreationTimestamp="2026-03-18 14:50:37 +0000 UTC" firstStartedPulling="2026-03-18 14:50:39.901350108 +0000 UTC m=+3024.030478555" lastFinishedPulling="2026-03-18 14:50:42.618471736 +0000 UTC m=+3026.747600203" observedRunningTime="2026-03-18 14:50:42.973417675 +0000 UTC m=+3027.102546162" watchObservedRunningTime="2026-03-18 14:50:42.985199892 +0000 UTC m=+3027.114328349" Mar 18 14:50:48 crc kubenswrapper[4857]: I0318 14:50:48.163478 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:48 crc kubenswrapper[4857]: I0318 14:50:48.164362 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:48 crc kubenswrapper[4857]: I0318 14:50:48.213519 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:49 crc kubenswrapper[4857]: I0318 14:50:49.085541 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:49 crc kubenswrapper[4857]: I0318 14:50:49.179887 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:51 crc kubenswrapper[4857]: I0318 14:50:51.063705 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xpbcx" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="registry-server" containerID="cri-o://b4d07ea368ecd1f2c83ce1370eb7655532d995cf8c006ab78a7c79d714359435" gracePeriod=2 Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.082202 4857 generic.go:334] "Generic (PLEG): container finished" podID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerID="b4d07ea368ecd1f2c83ce1370eb7655532d995cf8c006ab78a7c79d714359435" exitCode=0 Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.082304 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerDied","Data":"b4d07ea368ecd1f2c83ce1370eb7655532d995cf8c006ab78a7c79d714359435"} Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.082543 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xpbcx" event={"ID":"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39","Type":"ContainerDied","Data":"8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e"} Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 
14:50:52.082568 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b57c13c2810c0f521172392a8a6af5c0d0f6686be33faff9717c56b5ccc890e" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.192488 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.343036 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content\") pod \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.343356 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities\") pod \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.343449 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chq5p\" (UniqueName: \"kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p\") pod \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\" (UID: \"a8e1e3f1-9dde-400d-9b1b-33d7e2274c39\") " Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.344300 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities" (OuterVolumeSpecName: "utilities") pod "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" (UID: "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.344458 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.349945 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p" (OuterVolumeSpecName: "kube-api-access-chq5p") pod "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" (UID: "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39"). InnerVolumeSpecName "kube-api-access-chq5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.391117 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" (UID: "a8e1e3f1-9dde-400d-9b1b-33d7e2274c39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.447367 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:52 crc kubenswrapper[4857]: I0318 14:50:52.447425 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chq5p\" (UniqueName: \"kubernetes.io/projected/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39-kube-api-access-chq5p\") on node \"crc\" DevicePath \"\"" Mar 18 14:50:53 crc kubenswrapper[4857]: I0318 14:50:53.094818 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xpbcx" Mar 18 14:50:53 crc kubenswrapper[4857]: I0318 14:50:53.140228 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:53 crc kubenswrapper[4857]: I0318 14:50:53.151306 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xpbcx"] Mar 18 14:50:53 crc kubenswrapper[4857]: I0318 14:50:53.193162 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" path="/var/lib/kubelet/pods/a8e1e3f1-9dde-400d-9b1b-33d7e2274c39/volumes" Mar 18 14:50:57 crc kubenswrapper[4857]: I0318 14:50:57.038552 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:50:57 crc kubenswrapper[4857]: I0318 14:50:57.039107 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:50:57 crc kubenswrapper[4857]: I0318 14:50:57.039193 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:50:57 crc kubenswrapper[4857]: I0318 14:50:57.040903 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:50:57 crc kubenswrapper[4857]: I0318 14:50:57.041083 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" gracePeriod=600 Mar 18 14:50:57 crc kubenswrapper[4857]: E0318 14:50:57.180358 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:50:58 crc kubenswrapper[4857]: I0318 14:50:58.161742 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" exitCode=0 Mar 18 14:50:58 crc kubenswrapper[4857]: I0318 14:50:58.161838 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df"} Mar 18 14:50:58 crc kubenswrapper[4857]: I0318 14:50:58.162168 4857 scope.go:117] "RemoveContainer" containerID="67082775edf3bf416157d2bfb37d893041f468c0a3bcce0521133c4fea429fed" Mar 18 14:50:58 crc kubenswrapper[4857]: I0318 14:50:58.164324 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:50:58 crc kubenswrapper[4857]: E0318 14:50:58.164973 4857 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:51:00 crc kubenswrapper[4857]: I0318 14:51:00.945242 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:00 crc kubenswrapper[4857]: E0318 14:51:00.946197 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="registry-server" Mar 18 14:51:00 crc kubenswrapper[4857]: I0318 14:51:00.946215 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="registry-server" Mar 18 14:51:00 crc kubenswrapper[4857]: E0318 14:51:00.946250 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="extract-utilities" Mar 18 14:51:00 crc kubenswrapper[4857]: I0318 14:51:00.946256 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="extract-utilities" Mar 18 14:51:00 crc kubenswrapper[4857]: E0318 14:51:00.946283 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="extract-content" Mar 18 14:51:00 crc kubenswrapper[4857]: I0318 14:51:00.946291 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="extract-content" Mar 18 14:51:00 crc kubenswrapper[4857]: I0318 14:51:00.946535 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e1e3f1-9dde-400d-9b1b-33d7e2274c39" containerName="registry-server" Mar 18 14:51:00 crc 
kubenswrapper[4857]: I0318 14:51:00.949603 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.003824 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.131277 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwff\" (UniqueName: \"kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.131479 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.132012 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.235092 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc 
kubenswrapper[4857]: I0318 14:51:01.235442 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwff\" (UniqueName: \"kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.235516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.235844 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.236151 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.258818 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwff\" (UniqueName: \"kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff\") pod \"certified-operators-578s7\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 
14:51:01.276353 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:01 crc kubenswrapper[4857]: I0318 14:51:01.856859 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:02 crc kubenswrapper[4857]: I0318 14:51:02.211803 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerStarted","Data":"4d54b710b50d7e925124a652203ed0ce66e2d46816dd3ef17ff13132da64ea00"} Mar 18 14:51:03 crc kubenswrapper[4857]: I0318 14:51:03.237041 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerID="cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058" exitCode=0 Mar 18 14:51:03 crc kubenswrapper[4857]: I0318 14:51:03.237134 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerDied","Data":"cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058"} Mar 18 14:51:06 crc kubenswrapper[4857]: I0318 14:51:06.289605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerStarted","Data":"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a"} Mar 18 14:51:11 crc kubenswrapper[4857]: I0318 14:51:11.164967 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:51:11 crc kubenswrapper[4857]: E0318 14:51:11.166237 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:51:11 crc kubenswrapper[4857]: I0318 14:51:11.362309 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerID="4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a" exitCode=0 Mar 18 14:51:11 crc kubenswrapper[4857]: I0318 14:51:11.362387 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerDied","Data":"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a"} Mar 18 14:51:16 crc kubenswrapper[4857]: I0318 14:51:16.432610 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerStarted","Data":"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34"} Mar 18 14:51:16 crc kubenswrapper[4857]: I0318 14:51:16.468021 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-578s7" podStartSLOduration=3.988053377 podStartE2EDuration="16.46798524s" podCreationTimestamp="2026-03-18 14:51:00 +0000 UTC" firstStartedPulling="2026-03-18 14:51:03.249324312 +0000 UTC m=+3047.378452779" lastFinishedPulling="2026-03-18 14:51:15.729256185 +0000 UTC m=+3059.858384642" observedRunningTime="2026-03-18 14:51:16.454399378 +0000 UTC m=+3060.583527835" watchObservedRunningTime="2026-03-18 14:51:16.46798524 +0000 UTC m=+3060.597113697" Mar 18 14:51:21 crc kubenswrapper[4857]: I0318 14:51:21.276808 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:21 crc kubenswrapper[4857]: I0318 
14:51:21.277552 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:21 crc kubenswrapper[4857]: I0318 14:51:21.351789 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:21 crc kubenswrapper[4857]: I0318 14:51:21.549348 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:21 crc kubenswrapper[4857]: I0318 14:51:21.616191 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:23 crc kubenswrapper[4857]: I0318 14:51:23.522584 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-578s7" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="registry-server" containerID="cri-o://e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34" gracePeriod=2 Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.091391 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.128164 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content\") pod \"fe3a8043-6283-4fcc-a136-03a1033a5707\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.128338 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwff\" (UniqueName: \"kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff\") pod \"fe3a8043-6283-4fcc-a136-03a1033a5707\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.128377 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities\") pod \"fe3a8043-6283-4fcc-a136-03a1033a5707\" (UID: \"fe3a8043-6283-4fcc-a136-03a1033a5707\") " Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.130043 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities" (OuterVolumeSpecName: "utilities") pod "fe3a8043-6283-4fcc-a136-03a1033a5707" (UID: "fe3a8043-6283-4fcc-a136-03a1033a5707"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.170859 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.171378 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff" (OuterVolumeSpecName: "kube-api-access-2cwff") pod "fe3a8043-6283-4fcc-a136-03a1033a5707" (UID: "fe3a8043-6283-4fcc-a136-03a1033a5707"). InnerVolumeSpecName "kube-api-access-2cwff". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:51:24 crc kubenswrapper[4857]: E0318 14:51:24.171521 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.190116 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe3a8043-6283-4fcc-a136-03a1033a5707" (UID: "fe3a8043-6283-4fcc-a136-03a1033a5707"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.232270 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cwff\" (UniqueName: \"kubernetes.io/projected/fe3a8043-6283-4fcc-a136-03a1033a5707-kube-api-access-2cwff\") on node \"crc\" DevicePath \"\"" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.232309 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.232319 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3a8043-6283-4fcc-a136-03a1033a5707-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.538666 4857 generic.go:334] "Generic (PLEG): container finished" podID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerID="e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34" exitCode=0 Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.538728 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-578s7" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.538769 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerDied","Data":"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34"} Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.540187 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-578s7" event={"ID":"fe3a8043-6283-4fcc-a136-03a1033a5707","Type":"ContainerDied","Data":"4d54b710b50d7e925124a652203ed0ce66e2d46816dd3ef17ff13132da64ea00"} Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.540246 4857 scope.go:117] "RemoveContainer" containerID="e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.570362 4857 scope.go:117] "RemoveContainer" containerID="4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.596333 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.608109 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-578s7"] Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.613517 4857 scope.go:117] "RemoveContainer" containerID="cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.673627 4857 scope.go:117] "RemoveContainer" containerID="e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34" Mar 18 14:51:24 crc kubenswrapper[4857]: E0318 14:51:24.674378 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34\": container with ID starting with e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34 not found: ID does not exist" containerID="e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.674430 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34"} err="failed to get container status \"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34\": rpc error: code = NotFound desc = could not find container \"e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34\": container with ID starting with e22ab1080000d0e6173acd9e7521dbdbce2e413dcf076df075a82796c8606b34 not found: ID does not exist" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.674462 4857 scope.go:117] "RemoveContainer" containerID="4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a" Mar 18 14:51:24 crc kubenswrapper[4857]: E0318 14:51:24.675018 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a\": container with ID starting with 4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a not found: ID does not exist" containerID="4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.675052 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a"} err="failed to get container status \"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a\": rpc error: code = NotFound desc = could not find container \"4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a\": container with ID 
starting with 4438cc23da791dc696517022b57d3101ee905300da9735f9c781e5f0ef05d16a not found: ID does not exist" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.675073 4857 scope.go:117] "RemoveContainer" containerID="cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058" Mar 18 14:51:24 crc kubenswrapper[4857]: E0318 14:51:24.675920 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058\": container with ID starting with cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058 not found: ID does not exist" containerID="cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058" Mar 18 14:51:24 crc kubenswrapper[4857]: I0318 14:51:24.675955 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058"} err="failed to get container status \"cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058\": rpc error: code = NotFound desc = could not find container \"cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058\": container with ID starting with cd54af7119ff43e38ad850326a25f85f7997083bde76f5eb152e8bf0049cb058 not found: ID does not exist" Mar 18 14:51:25 crc kubenswrapper[4857]: I0318 14:51:25.200896 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" path="/var/lib/kubelet/pods/fe3a8043-6283-4fcc-a136-03a1033a5707/volumes" Mar 18 14:51:36 crc kubenswrapper[4857]: I0318 14:51:36.164966 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:51:36 crc kubenswrapper[4857]: E0318 14:51:36.166269 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:51:49 crc kubenswrapper[4857]: I0318 14:51:49.165148 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:51:49 crc kubenswrapper[4857]: E0318 14:51:49.165910 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.208303 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:52:00 crc kubenswrapper[4857]: E0318 14:52:00.209359 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.278902 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564092-f5pp8"] Mar 18 14:52:00 crc kubenswrapper[4857]: E0318 14:52:00.279703 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="extract-content" Mar 18 
14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.279731 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="extract-content" Mar 18 14:52:00 crc kubenswrapper[4857]: E0318 14:52:00.279813 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="extract-utilities" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.279824 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="extract-utilities" Mar 18 14:52:00 crc kubenswrapper[4857]: E0318 14:52:00.279854 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="registry-server" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.279862 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="registry-server" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.280142 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3a8043-6283-4fcc-a136-03a1033a5707" containerName="registry-server" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.281327 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.283642 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.287423 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.287679 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.291186 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564092-f5pp8"] Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.416219 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwb8\" (UniqueName: \"kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8\") pod \"auto-csr-approver-29564092-f5pp8\" (UID: \"bd963214-d435-4dd3-b38e-0c8339918824\") " pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.518626 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwb8\" (UniqueName: \"kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8\") pod \"auto-csr-approver-29564092-f5pp8\" (UID: \"bd963214-d435-4dd3-b38e-0c8339918824\") " pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.542022 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwb8\" (UniqueName: \"kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8\") pod \"auto-csr-approver-29564092-f5pp8\" (UID: \"bd963214-d435-4dd3-b38e-0c8339918824\") " 
pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:00 crc kubenswrapper[4857]: I0318 14:52:00.608882 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:01 crc kubenswrapper[4857]: I0318 14:52:01.158827 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564092-f5pp8"] Mar 18 14:52:01 crc kubenswrapper[4857]: I0318 14:52:01.525934 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" event={"ID":"bd963214-d435-4dd3-b38e-0c8339918824","Type":"ContainerStarted","Data":"463f8e50196c4cea244901d70fd1785e1756e6a05def2a746c8999255dfbe15e"} Mar 18 14:52:06 crc kubenswrapper[4857]: I0318 14:52:06.348603 4857 generic.go:334] "Generic (PLEG): container finished" podID="bd963214-d435-4dd3-b38e-0c8339918824" containerID="ae62422fd619a5702e55b322fceb26848da7baf34f0c03d56fbc9ea4adf811fe" exitCode=0 Mar 18 14:52:06 crc kubenswrapper[4857]: I0318 14:52:06.349244 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" event={"ID":"bd963214-d435-4dd3-b38e-0c8339918824","Type":"ContainerDied","Data":"ae62422fd619a5702e55b322fceb26848da7baf34f0c03d56fbc9ea4adf811fe"} Mar 18 14:52:07 crc kubenswrapper[4857]: I0318 14:52:07.784947 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:07 crc kubenswrapper[4857]: I0318 14:52:07.832059 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbwb8\" (UniqueName: \"kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8\") pod \"bd963214-d435-4dd3-b38e-0c8339918824\" (UID: \"bd963214-d435-4dd3-b38e-0c8339918824\") " Mar 18 14:52:07 crc kubenswrapper[4857]: I0318 14:52:07.845539 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8" (OuterVolumeSpecName: "kube-api-access-cbwb8") pod "bd963214-d435-4dd3-b38e-0c8339918824" (UID: "bd963214-d435-4dd3-b38e-0c8339918824"). InnerVolumeSpecName "kube-api-access-cbwb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.220583 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbwb8\" (UniqueName: \"kubernetes.io/projected/bd963214-d435-4dd3-b38e-0c8339918824-kube-api-access-cbwb8\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.372589 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" event={"ID":"bd963214-d435-4dd3-b38e-0c8339918824","Type":"ContainerDied","Data":"463f8e50196c4cea244901d70fd1785e1756e6a05def2a746c8999255dfbe15e"} Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.372645 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463f8e50196c4cea244901d70fd1785e1756e6a05def2a746c8999255dfbe15e" Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.372724 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564092-f5pp8" Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.933951 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564086-kd45d"] Mar 18 14:52:08 crc kubenswrapper[4857]: I0318 14:52:08.951669 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564086-kd45d"] Mar 18 14:52:09 crc kubenswrapper[4857]: I0318 14:52:09.180760 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363aabfa-9ff9-4f1f-bed5-05790896082a" path="/var/lib/kubelet/pods/363aabfa-9ff9-4f1f-bed5-05790896082a/volumes" Mar 18 14:52:13 crc kubenswrapper[4857]: I0318 14:52:13.435880 4857 generic.go:334] "Generic (PLEG): container finished" podID="cfcf59a9-242d-4953-9276-a0d09a4d3030" containerID="fd5ccd6e27e2012c52ac95f918d8d102041d11ed6eb7b7f3234420b25d6c4cd2" exitCode=0 Mar 18 14:52:13 crc kubenswrapper[4857]: I0318 14:52:13.435962 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" event={"ID":"cfcf59a9-242d-4953-9276-a0d09a4d3030","Type":"ContainerDied","Data":"fd5ccd6e27e2012c52ac95f918d8d102041d11ed6eb7b7f3234420b25d6c4cd2"} Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.010589 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.042819 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam\") pod \"cfcf59a9-242d-4953-9276-a0d09a4d3030\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.042863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle\") pod \"cfcf59a9-242d-4953-9276-a0d09a4d3030\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.042893 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0\") pod \"cfcf59a9-242d-4953-9276-a0d09a4d3030\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.043786 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zshk\" (UniqueName: \"kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk\") pod \"cfcf59a9-242d-4953-9276-a0d09a4d3030\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.043975 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory\") pod \"cfcf59a9-242d-4953-9276-a0d09a4d3030\" (UID: \"cfcf59a9-242d-4953-9276-a0d09a4d3030\") " Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.336956 4857 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cfcf59a9-242d-4953-9276-a0d09a4d3030" (UID: "cfcf59a9-242d-4953-9276-a0d09a4d3030"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.337296 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk" (OuterVolumeSpecName: "kube-api-access-6zshk") pod "cfcf59a9-242d-4953-9276-a0d09a4d3030" (UID: "cfcf59a9-242d-4953-9276-a0d09a4d3030"). InnerVolumeSpecName "kube-api-access-6zshk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.349879 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.358330 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zshk\" (UniqueName: \"kubernetes.io/projected/cfcf59a9-242d-4953-9276-a0d09a4d3030-kube-api-access-6zshk\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:15 crc kubenswrapper[4857]: E0318 14:52:15.359545 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.362061 4857 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.370109 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "cfcf59a9-242d-4953-9276-a0d09a4d3030" (UID: "cfcf59a9-242d-4953-9276-a0d09a4d3030"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.394171 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cfcf59a9-242d-4953-9276-a0d09a4d3030" (UID: "cfcf59a9-242d-4953-9276-a0d09a4d3030"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.414098 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory" (OuterVolumeSpecName: "inventory") pod "cfcf59a9-242d-4953-9276-a0d09a4d3030" (UID: "cfcf59a9-242d-4953-9276-a0d09a4d3030"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.464184 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.464223 4857 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.464234 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfcf59a9-242d-4953-9276-a0d09a4d3030-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.466618 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.501255 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9" event={"ID":"cfcf59a9-242d-4953-9276-a0d09a4d3030","Type":"ContainerDied","Data":"7dfe4f251c1f7f321a6870b5270bb5be788bc67efc8acfefc6e317fb60212dc9"} Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.501319 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfe4f251c1f7f321a6870b5270bb5be788bc67efc8acfefc6e317fb60212dc9" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.634525 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z"] Mar 18 14:52:15 crc kubenswrapper[4857]: E0318 14:52:15.635323 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcf59a9-242d-4953-9276-a0d09a4d3030" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.635345 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcf59a9-242d-4953-9276-a0d09a4d3030" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 18 14:52:15 crc kubenswrapper[4857]: E0318 14:52:15.635408 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd963214-d435-4dd3-b38e-0c8339918824" containerName="oc" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.635417 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd963214-d435-4dd3-b38e-0c8339918824" containerName="oc" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.635731 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfcf59a9-242d-4953-9276-a0d09a4d3030" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.635778 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd963214-d435-4dd3-b38e-0c8339918824" containerName="oc" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.636905 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.641043 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.641070 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.641177 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.641344 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.641361 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.642471 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.642474 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.658461 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z"] Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.669292 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.669401 4857 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.669459 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670043 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670149 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670221 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670255 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670343 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsbg\" (UniqueName: \"kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670641 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670705 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: 
\"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.670855 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774321 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774405 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774455 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774491 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774560 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsbg\" (UniqueName: \"kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774615 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774682 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774717 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.774927 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.775061 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.775158 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.775570 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.779668 4857 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.780610 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.781569 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.781985 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.782327 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: 
\"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.783201 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.785337 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.786786 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.792181 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.793735 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsbg\" (UniqueName: 
\"kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-nm64z\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.836263 4857 scope.go:117] "RemoveContainer" containerID="07d4ba9378bea97ab4cebcc2d0a8190b46f7f1ea28cd72523c23180eb69aeedf" Mar 18 14:52:15 crc kubenswrapper[4857]: I0318 14:52:15.962300 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:52:16 crc kubenswrapper[4857]: I0318 14:52:16.564630 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z"] Mar 18 14:52:17 crc kubenswrapper[4857]: I0318 14:52:17.491717 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" event={"ID":"9608ecda-882a-47d8-97e1-73eace0dfcb7","Type":"ContainerStarted","Data":"2ff369ee85c067fe748954098aa87bdeb7fd3d9d8fd2c7e6baf16a36fc61ed04"} Mar 18 14:52:18 crc kubenswrapper[4857]: I0318 14:52:18.560932 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:52:19 crc kubenswrapper[4857]: I0318 14:52:19.520666 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" event={"ID":"9608ecda-882a-47d8-97e1-73eace0dfcb7","Type":"ContainerStarted","Data":"0d632c4c6ec4b43ff1a14aaeaadbffdac432f602e386f028007503c8bfce52b9"} Mar 18 14:52:19 crc kubenswrapper[4857]: I0318 14:52:19.547561 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" podStartSLOduration=2.568186734 podStartE2EDuration="4.547531762s" podCreationTimestamp="2026-03-18 14:52:15 +0000 UTC" 
firstStartedPulling="2026-03-18 14:52:16.574694175 +0000 UTC m=+3120.703822642" lastFinishedPulling="2026-03-18 14:52:18.554039213 +0000 UTC m=+3122.683167670" observedRunningTime="2026-03-18 14:52:19.539418678 +0000 UTC m=+3123.668547145" watchObservedRunningTime="2026-03-18 14:52:19.547531762 +0000 UTC m=+3123.676660219" Mar 18 14:52:27 crc kubenswrapper[4857]: I0318 14:52:27.200479 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:52:27 crc kubenswrapper[4857]: E0318 14:52:27.201565 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:52:41 crc kubenswrapper[4857]: I0318 14:52:41.242381 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:52:41 crc kubenswrapper[4857]: E0318 14:52:41.243253 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:52:56 crc kubenswrapper[4857]: I0318 14:52:56.164973 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:52:56 crc kubenswrapper[4857]: E0318 14:52:56.166422 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:53:09 crc kubenswrapper[4857]: I0318 14:53:09.165575 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:53:09 crc kubenswrapper[4857]: E0318 14:53:09.167051 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:53:24 crc kubenswrapper[4857]: I0318 14:53:24.165341 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:53:24 crc kubenswrapper[4857]: E0318 14:53:24.166812 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:53:38 crc kubenswrapper[4857]: I0318 14:53:38.164513 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:53:38 crc kubenswrapper[4857]: E0318 14:53:38.165444 4857 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:53:49 crc kubenswrapper[4857]: I0318 14:53:49.165938 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:53:49 crc kubenswrapper[4857]: E0318 14:53:49.170297 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.187727 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564094-p7bb4"] Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.191064 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.194772 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.195109 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.195186 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.203463 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564094-p7bb4"] Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.226625 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gt4\" (UniqueName: \"kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4\") pod \"auto-csr-approver-29564094-p7bb4\" (UID: \"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2\") " pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.330158 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2gt4\" (UniqueName: \"kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4\") pod \"auto-csr-approver-29564094-p7bb4\" (UID: \"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2\") " pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.354517 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2gt4\" (UniqueName: \"kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4\") pod \"auto-csr-approver-29564094-p7bb4\" (UID: \"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2\") " 
pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:00 crc kubenswrapper[4857]: I0318 14:54:00.520451 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:01 crc kubenswrapper[4857]: I0318 14:54:01.086984 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564094-p7bb4"] Mar 18 14:54:02 crc kubenswrapper[4857]: I0318 14:54:02.165022 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:54:02 crc kubenswrapper[4857]: E0318 14:54:02.165312 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:02 crc kubenswrapper[4857]: I0318 14:54:02.176962 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" event={"ID":"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2","Type":"ContainerStarted","Data":"b0cdd198752f4698b5af7e8117c09df9dceb757ebc1e1d55e5d4a38ebe72f17f"} Mar 18 14:54:03 crc kubenswrapper[4857]: I0318 14:54:03.191149 4857 generic.go:334] "Generic (PLEG): container finished" podID="d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" containerID="49c984d649fd6e870af76f201592d11de9c8f403ed252453e755d19e66db3ebc" exitCode=0 Mar 18 14:54:03 crc kubenswrapper[4857]: I0318 14:54:03.191605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" event={"ID":"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2","Type":"ContainerDied","Data":"49c984d649fd6e870af76f201592d11de9c8f403ed252453e755d19e66db3ebc"} 
Mar 18 14:54:04 crc kubenswrapper[4857]: I0318 14:54:04.637239 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:04 crc kubenswrapper[4857]: I0318 14:54:04.775528 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2gt4\" (UniqueName: \"kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4\") pod \"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2\" (UID: \"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2\") " Mar 18 14:54:04 crc kubenswrapper[4857]: I0318 14:54:04.797660 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4" (OuterVolumeSpecName: "kube-api-access-j2gt4") pod "d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" (UID: "d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2"). InnerVolumeSpecName "kube-api-access-j2gt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:54:04 crc kubenswrapper[4857]: I0318 14:54:04.879726 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2gt4\" (UniqueName: \"kubernetes.io/projected/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2-kube-api-access-j2gt4\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:05 crc kubenswrapper[4857]: I0318 14:54:05.217561 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" event={"ID":"d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2","Type":"ContainerDied","Data":"b0cdd198752f4698b5af7e8117c09df9dceb757ebc1e1d55e5d4a38ebe72f17f"} Mar 18 14:54:05 crc kubenswrapper[4857]: I0318 14:54:05.217676 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0cdd198752f4698b5af7e8117c09df9dceb757ebc1e1d55e5d4a38ebe72f17f" Mar 18 14:54:05 crc kubenswrapper[4857]: I0318 14:54:05.217608 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564094-p7bb4" Mar 18 14:54:05 crc kubenswrapper[4857]: I0318 14:54:05.739422 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564088-xfswd"] Mar 18 14:54:05 crc kubenswrapper[4857]: I0318 14:54:05.756235 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564088-xfswd"] Mar 18 14:54:07 crc kubenswrapper[4857]: I0318 14:54:07.255477 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d136ecf4-b974-463f-acd4-bda38ec47748" path="/var/lib/kubelet/pods/d136ecf4-b974-463f-acd4-bda38ec47748/volumes" Mar 18 14:54:15 crc kubenswrapper[4857]: I0318 14:54:15.983401 4857 scope.go:117] "RemoveContainer" containerID="0703ad3665df2dbeb2ed979dfe1037c8853fcdcb88c42059e54f4c9a1de04b4c" Mar 18 14:54:16 crc kubenswrapper[4857]: I0318 14:54:16.022804 4857 scope.go:117] "RemoveContainer" containerID="0759f6761c278fcc7ca669ad05fe18dc1167ab4980891414447668b7558a49c2" Mar 18 14:54:16 crc kubenswrapper[4857]: I0318 14:54:16.096959 4857 scope.go:117] "RemoveContainer" containerID="52eb4a34aad31b4e0902f5ac8ac0ef6b5c5eecb5fabf091e392b9eb8ea2b24e3" Mar 18 14:54:16 crc kubenswrapper[4857]: I0318 14:54:16.166237 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:54:16 crc kubenswrapper[4857]: E0318 14:54:16.166683 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:16 crc kubenswrapper[4857]: I0318 14:54:16.197502 4857 scope.go:117] "RemoveContainer" 
containerID="a7f87a3d466f935136565358361d9239bcb1f9a606c0621984a368e08f5d061d" Mar 18 14:54:29 crc kubenswrapper[4857]: I0318 14:54:29.164653 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:54:29 crc kubenswrapper[4857]: E0318 14:54:29.165767 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:42 crc kubenswrapper[4857]: I0318 14:54:42.164467 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:54:42 crc kubenswrapper[4857]: E0318 14:54:42.166453 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:51 crc kubenswrapper[4857]: I0318 14:54:51.857472 4857 generic.go:334] "Generic (PLEG): container finished" podID="9608ecda-882a-47d8-97e1-73eace0dfcb7" containerID="0d632c4c6ec4b43ff1a14aaeaadbffdac432f602e386f028007503c8bfce52b9" exitCode=0 Mar 18 14:54:51 crc kubenswrapper[4857]: I0318 14:54:51.857976 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" 
event={"ID":"9608ecda-882a-47d8-97e1-73eace0dfcb7","Type":"ContainerDied","Data":"0d632c4c6ec4b43ff1a14aaeaadbffdac432f602e386f028007503c8bfce52b9"} Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.542634 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709314 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709394 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709584 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709627 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqsbg\" (UniqueName: \"kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709669 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709700 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709873 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709917 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.709999 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.710078 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0\") pod 
\"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.710119 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2\") pod \"9608ecda-882a-47d8-97e1-73eace0dfcb7\" (UID: \"9608ecda-882a-47d8-97e1-73eace0dfcb7\") " Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.716727 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.738291 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg" (OuterVolumeSpecName: "kube-api-access-cqsbg") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "kube-api-access-cqsbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.779090 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.794167 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.816741 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.816827 4857 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.816848 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqsbg\" (UniqueName: \"kubernetes.io/projected/9608ecda-882a-47d8-97e1-73eace0dfcb7-kube-api-access-cqsbg\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.816865 4857 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.820179 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: 
"9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.822111 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory" (OuterVolumeSpecName: "inventory") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.848587 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.850633 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.852357 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.859203 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.866984 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9608ecda-882a-47d8-97e1-73eace0dfcb7" (UID: "9608ecda-882a-47d8-97e1-73eace0dfcb7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.885132 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" event={"ID":"9608ecda-882a-47d8-97e1-73eace0dfcb7","Type":"ContainerDied","Data":"2ff369ee85c067fe748954098aa87bdeb7fd3d9d8fd2c7e6baf16a36fc61ed04"} Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.885273 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff369ee85c067fe748954098aa87bdeb7fd3d9d8fd2c7e6baf16a36fc61ed04" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.885361 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-nm64z" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.929993 4857 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.930949 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.930973 4857 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.930984 4857 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.930994 4857 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.931004 4857 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:53 crc kubenswrapper[4857]: I0318 14:54:53.931013 4857 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/9608ecda-882a-47d8-97e1-73eace0dfcb7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.046172 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9"] Mar 18 14:54:54 crc kubenswrapper[4857]: E0318 14:54:54.047245 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" containerName="oc" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.047292 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" containerName="oc" Mar 18 14:54:54 crc kubenswrapper[4857]: E0318 14:54:54.047367 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9608ecda-882a-47d8-97e1-73eace0dfcb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.047381 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9608ecda-882a-47d8-97e1-73eace0dfcb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.048613 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" containerName="oc" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.048702 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9608ecda-882a-47d8-97e1-73eace0dfcb7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.049972 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.054405 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.054604 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.054726 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.054975 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.058368 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.063636 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9"] Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.242914 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243076 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243160 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243193 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243340 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243544 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7bb\" (UniqueName: \"kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.243827 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.347498 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.348278 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.348607 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.349427 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.350612 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7bb\" (UniqueName: \"kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.350829 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.350944 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.352995 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.353290 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.354373 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.354411 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.354972 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.359041 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.371791 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7bb\" (UniqueName: \"kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:54 crc kubenswrapper[4857]: I0318 14:54:54.386314 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:54:55 crc kubenswrapper[4857]: I0318 14:54:55.059519 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9"] Mar 18 14:54:55 crc kubenswrapper[4857]: I0318 14:54:55.163738 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:54:55 crc kubenswrapper[4857]: E0318 14:54:55.164091 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:54:55 crc kubenswrapper[4857]: I0318 14:54:55.911236 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" event={"ID":"bd20a145-8f96-4a05-b051-38f2e6edc1ad","Type":"ContainerStarted","Data":"1a09787b42a1bcfc2db68754ca1f95f6ee9ee70de9acd40daf29b52d9c1ce6bc"} Mar 18 14:54:58 crc kubenswrapper[4857]: I0318 14:54:58.958990 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" event={"ID":"bd20a145-8f96-4a05-b051-38f2e6edc1ad","Type":"ContainerStarted","Data":"b52363145ecb4db3c73475dce81d28da51e6a2bf37dc4d7eb8bd99629f166857"} Mar 18 14:54:58 crc kubenswrapper[4857]: I0318 14:54:58.979409 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" podStartSLOduration=2.130752758 podStartE2EDuration="4.979345527s" podCreationTimestamp="2026-03-18 14:54:54 +0000 UTC" firstStartedPulling="2026-03-18 
14:54:55.063741716 +0000 UTC m=+3279.192870173" lastFinishedPulling="2026-03-18 14:54:57.912334485 +0000 UTC m=+3282.041462942" observedRunningTime="2026-03-18 14:54:58.976359732 +0000 UTC m=+3283.105488179" watchObservedRunningTime="2026-03-18 14:54:58.979345527 +0000 UTC m=+3283.108473984" Mar 18 14:55:10 crc kubenswrapper[4857]: I0318 14:55:10.164458 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:55:10 crc kubenswrapper[4857]: E0318 14:55:10.165288 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:55:22 crc kubenswrapper[4857]: I0318 14:55:22.164231 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:55:22 crc kubenswrapper[4857]: E0318 14:55:22.165099 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:55:35 crc kubenswrapper[4857]: I0318 14:55:35.288851 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:55:35 crc kubenswrapper[4857]: E0318 14:55:35.302337 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:55:49 crc kubenswrapper[4857]: I0318 14:55:49.165074 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:55:49 crc kubenswrapper[4857]: E0318 14:55:49.166112 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.190908 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564096-srzqb"] Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.193808 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.196742 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.197455 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.206742 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.208440 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564096-srzqb"] Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.386182 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9w48\" (UniqueName: \"kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48\") pod \"auto-csr-approver-29564096-srzqb\" (UID: \"ae2d6530-cf73-400d-a128-6e22c36e1098\") " pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.492113 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9w48\" (UniqueName: \"kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48\") pod \"auto-csr-approver-29564096-srzqb\" (UID: \"ae2d6530-cf73-400d-a128-6e22c36e1098\") " pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:00 crc kubenswrapper[4857]: I0318 14:56:00.543005 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9w48\" (UniqueName: \"kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48\") pod \"auto-csr-approver-29564096-srzqb\" (UID: \"ae2d6530-cf73-400d-a128-6e22c36e1098\") " 
pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:01 crc kubenswrapper[4857]: I0318 14:56:01.017956 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:01 crc kubenswrapper[4857]: I0318 14:56:01.676690 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564096-srzqb"] Mar 18 14:56:01 crc kubenswrapper[4857]: I0318 14:56:01.683136 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 14:56:02 crc kubenswrapper[4857]: I0318 14:56:02.633154 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564096-srzqb" event={"ID":"ae2d6530-cf73-400d-a128-6e22c36e1098","Type":"ContainerStarted","Data":"dc13e465dd4d7273f0654cecda09250fe0930aef600e2b06e2752929fef17e0a"} Mar 18 14:56:04 crc kubenswrapper[4857]: I0318 14:56:04.164640 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:56:04 crc kubenswrapper[4857]: I0318 14:56:04.774071 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71"} Mar 18 14:56:05 crc kubenswrapper[4857]: I0318 14:56:05.804008 4857 generic.go:334] "Generic (PLEG): container finished" podID="ae2d6530-cf73-400d-a128-6e22c36e1098" containerID="c1726e8879537751e25b8d85bb24876c6b46609755de5b8808bb7dee9af4c4cb" exitCode=0 Mar 18 14:56:05 crc kubenswrapper[4857]: I0318 14:56:05.804186 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564096-srzqb" 
event={"ID":"ae2d6530-cf73-400d-a128-6e22c36e1098","Type":"ContainerDied","Data":"c1726e8879537751e25b8d85bb24876c6b46609755de5b8808bb7dee9af4c4cb"} Mar 18 14:56:07 crc kubenswrapper[4857]: I0318 14:56:07.316310 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:07 crc kubenswrapper[4857]: I0318 14:56:07.512486 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9w48\" (UniqueName: \"kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48\") pod \"ae2d6530-cf73-400d-a128-6e22c36e1098\" (UID: \"ae2d6530-cf73-400d-a128-6e22c36e1098\") " Mar 18 14:56:07 crc kubenswrapper[4857]: I0318 14:56:07.544138 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48" (OuterVolumeSpecName: "kube-api-access-j9w48") pod "ae2d6530-cf73-400d-a128-6e22c36e1098" (UID: "ae2d6530-cf73-400d-a128-6e22c36e1098"). InnerVolumeSpecName "kube-api-access-j9w48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:56:07 crc kubenswrapper[4857]: I0318 14:56:07.616803 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9w48\" (UniqueName: \"kubernetes.io/projected/ae2d6530-cf73-400d-a128-6e22c36e1098-kube-api-access-j9w48\") on node \"crc\" DevicePath \"\"" Mar 18 14:56:08 crc kubenswrapper[4857]: I0318 14:56:08.041616 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564096-srzqb" event={"ID":"ae2d6530-cf73-400d-a128-6e22c36e1098","Type":"ContainerDied","Data":"dc13e465dd4d7273f0654cecda09250fe0930aef600e2b06e2752929fef17e0a"} Mar 18 14:56:08 crc kubenswrapper[4857]: I0318 14:56:08.041929 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc13e465dd4d7273f0654cecda09250fe0930aef600e2b06e2752929fef17e0a" Mar 18 14:56:08 crc kubenswrapper[4857]: I0318 14:56:08.041736 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564096-srzqb" Mar 18 14:56:08 crc kubenswrapper[4857]: I0318 14:56:08.399056 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564090-9kxjx"] Mar 18 14:56:08 crc kubenswrapper[4857]: I0318 14:56:08.415475 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564090-9kxjx"] Mar 18 14:56:09 crc kubenswrapper[4857]: I0318 14:56:09.177877 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb214a3-d887-4e37-b64e-89c873da1282" path="/var/lib/kubelet/pods/9eb214a3-d887-4e37-b64e-89c873da1282/volumes" Mar 18 14:56:16 crc kubenswrapper[4857]: I0318 14:56:16.316679 4857 scope.go:117] "RemoveContainer" containerID="279b56bd529c369248f67383590111c9a25dd6b7884d93956e575810ff2ab8cd" Mar 18 14:57:16 crc kubenswrapper[4857]: I0318 14:57:16.428010 4857 scope.go:117] "RemoveContainer" 
containerID="b4d07ea368ecd1f2c83ce1370eb7655532d995cf8c006ab78a7c79d714359435" Mar 18 14:57:16 crc kubenswrapper[4857]: I0318 14:57:16.456774 4857 scope.go:117] "RemoveContainer" containerID="f673fd50757082bdebb0b1888f8aafdabee7198545aff4c3058e684b9c008cc2" Mar 18 14:57:16 crc kubenswrapper[4857]: I0318 14:57:16.484584 4857 scope.go:117] "RemoveContainer" containerID="0d568e5c008c2b8127739b6859995e60051bdce1a3fd293eaa67da5e57a5db79" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.618433 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:57:36 crc kubenswrapper[4857]: E0318 14:57:36.619870 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d6530-cf73-400d-a128-6e22c36e1098" containerName="oc" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.619896 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d6530-cf73-400d-a128-6e22c36e1098" containerName="oc" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.620279 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2d6530-cf73-400d-a128-6e22c36e1098" containerName="oc" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.622791 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.636331 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.684418 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.684706 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.684862 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgjb\" (UniqueName: \"kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.787685 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.788038 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.788249 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtgjb\" (UniqueName: \"kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.788507 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.788511 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.808720 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtgjb\" (UniqueName: \"kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb\") pod \"redhat-operators-mkgwv\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:36 crc kubenswrapper[4857]: I0318 14:57:36.957890 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:37 crc kubenswrapper[4857]: I0318 14:57:37.558422 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:57:37 crc kubenswrapper[4857]: I0318 14:57:37.683069 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerStarted","Data":"c61945d0c0a830c61f0ca18383751e54e5372271c18398467873cd5852a7a462"} Mar 18 14:57:38 crc kubenswrapper[4857]: I0318 14:57:38.700677 4857 generic.go:334] "Generic (PLEG): container finished" podID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerID="246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a" exitCode=0 Mar 18 14:57:38 crc kubenswrapper[4857]: I0318 14:57:38.700782 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerDied","Data":"246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a"} Mar 18 14:57:39 crc kubenswrapper[4857]: I0318 14:57:39.712989 4857 generic.go:334] "Generic (PLEG): container finished" podID="bd20a145-8f96-4a05-b051-38f2e6edc1ad" containerID="b52363145ecb4db3c73475dce81d28da51e6a2bf37dc4d7eb8bd99629f166857" exitCode=0 Mar 18 14:57:39 crc kubenswrapper[4857]: I0318 14:57:39.713086 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" event={"ID":"bd20a145-8f96-4a05-b051-38f2e6edc1ad","Type":"ContainerDied","Data":"b52363145ecb4db3c73475dce81d28da51e6a2bf37dc4d7eb8bd99629f166857"} Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.258689 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.432572 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.433001 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.433035 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn7bb\" (UniqueName: \"kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.433178 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.433293 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: 
I0318 14:57:41.433406 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.433532 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam\") pod \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\" (UID: \"bd20a145-8f96-4a05-b051-38f2e6edc1ad\") " Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.439395 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.456239 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb" (OuterVolumeSpecName: "kube-api-access-fn7bb") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "kube-api-access-fn7bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.475918 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.476668 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.492444 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.493057 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory" (OuterVolumeSpecName: "inventory") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.511036 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd20a145-8f96-4a05-b051-38f2e6edc1ad" (UID: "bd20a145-8f96-4a05-b051-38f2e6edc1ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.537450 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.537720 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.537905 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.538062 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn7bb\" (UniqueName: \"kubernetes.io/projected/bd20a145-8f96-4a05-b051-38f2e6edc1ad-kube-api-access-fn7bb\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.538182 4857 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 
14:57:41.538269 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.538350 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bd20a145-8f96-4a05-b051-38f2e6edc1ad-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.745128 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" event={"ID":"bd20a145-8f96-4a05-b051-38f2e6edc1ad","Type":"ContainerDied","Data":"1a09787b42a1bcfc2db68754ca1f95f6ee9ee70de9acd40daf29b52d9c1ce6bc"} Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.745536 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a09787b42a1bcfc2db68754ca1f95f6ee9ee70de9acd40daf29b52d9c1ce6bc" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.745241 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.868202 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d"] Mar 18 14:57:41 crc kubenswrapper[4857]: E0318 14:57:41.868943 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd20a145-8f96-4a05-b051-38f2e6edc1ad" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.868967 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd20a145-8f96-4a05-b051-38f2e6edc1ad" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.869309 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd20a145-8f96-4a05-b051-38f2e6edc1ad" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.870357 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.873334 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.873985 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.874189 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.874435 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.874695 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:57:41 crc kubenswrapper[4857]: I0318 14:57:41.887625 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d"] Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054519 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054578 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054615 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054639 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054714 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4d8\" (UniqueName: \"kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.054905 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.055081 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158052 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4d8\" (UniqueName: \"kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158185 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158355 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158427 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158452 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158477 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.158493 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.162764 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.163596 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.166682 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.170990 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.171388 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.171513 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.183773 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4d8\" (UniqueName: \"kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:42 crc kubenswrapper[4857]: I0318 14:57:42.192911 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:57:44 crc kubenswrapper[4857]: I0318 14:57:44.817344 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d"] Mar 18 14:57:44 crc kubenswrapper[4857]: W0318 14:57:44.823150 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod839c8978_90ec_42f6_9adb_6ca8ec295f61.slice/crio-d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c WatchSource:0}: Error finding container d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c: Status 404 returned error can't find the container with id d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c Mar 18 14:57:45 crc kubenswrapper[4857]: I0318 14:57:45.803398 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerStarted","Data":"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156"} Mar 18 14:57:45 crc kubenswrapper[4857]: I0318 14:57:45.805278 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" event={"ID":"839c8978-90ec-42f6-9adb-6ca8ec295f61","Type":"ContainerStarted","Data":"d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c"} Mar 18 14:57:48 crc kubenswrapper[4857]: I0318 14:57:48.957190 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" event={"ID":"839c8978-90ec-42f6-9adb-6ca8ec295f61","Type":"ContainerStarted","Data":"61a7df3111b9e41eb74af9fcc2d903bc605085c04b254b6b1b4c5caa8b25c70f"} Mar 18 14:57:48 crc kubenswrapper[4857]: I0318 14:57:48.988504 4857 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" podStartSLOduration=4.783116257 podStartE2EDuration="7.988461729s" podCreationTimestamp="2026-03-18 14:57:41 +0000 UTC" firstStartedPulling="2026-03-18 14:57:44.827599184 +0000 UTC m=+3448.956727641" lastFinishedPulling="2026-03-18 14:57:48.032944666 +0000 UTC m=+3452.162073113" observedRunningTime="2026-03-18 14:57:48.987654208 +0000 UTC m=+3453.116782665" watchObservedRunningTime="2026-03-18 14:57:48.988461729 +0000 UTC m=+3453.117590186" Mar 18 14:57:53 crc kubenswrapper[4857]: I0318 14:57:53.016374 4857 generic.go:334] "Generic (PLEG): container finished" podID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerID="4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156" exitCode=0 Mar 18 14:57:53 crc kubenswrapper[4857]: I0318 14:57:53.016462 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerDied","Data":"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156"} Mar 18 14:57:55 crc kubenswrapper[4857]: I0318 14:57:55.041226 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerStarted","Data":"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f"} Mar 18 14:57:55 crc kubenswrapper[4857]: I0318 14:57:55.072367 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkgwv" podStartSLOduration=3.422611123 podStartE2EDuration="19.072335123s" podCreationTimestamp="2026-03-18 14:57:36 +0000 UTC" firstStartedPulling="2026-03-18 14:57:38.70371124 +0000 UTC m=+3442.832839697" lastFinishedPulling="2026-03-18 14:57:54.35343523 +0000 UTC m=+3458.482563697" observedRunningTime="2026-03-18 14:57:55.059920819 +0000 UTC m=+3459.189049276" 
watchObservedRunningTime="2026-03-18 14:57:55.072335123 +0000 UTC m=+3459.201463590" Mar 18 14:57:56 crc kubenswrapper[4857]: I0318 14:57:56.959252 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:56 crc kubenswrapper[4857]: I0318 14:57:56.959788 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:57:58 crc kubenswrapper[4857]: I0318 14:57:58.311481 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkgwv" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" probeResult="failure" output=< Mar 18 14:57:58 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:57:58 crc kubenswrapper[4857]: > Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.159525 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564098-72plm"] Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.162357 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.165961 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.166034 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.166168 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.180793 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564098-72plm"] Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.361110 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppnv\" (UniqueName: \"kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv\") pod \"auto-csr-approver-29564098-72plm\" (UID: \"9242b541-2ecc-48ba-b447-00fee2a3b85c\") " pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.464170 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xppnv\" (UniqueName: \"kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv\") pod \"auto-csr-approver-29564098-72plm\" (UID: \"9242b541-2ecc-48ba-b447-00fee2a3b85c\") " pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.487086 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xppnv\" (UniqueName: \"kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv\") pod \"auto-csr-approver-29564098-72plm\" (UID: \"9242b541-2ecc-48ba-b447-00fee2a3b85c\") " 
pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:00 crc kubenswrapper[4857]: I0318 14:58:00.495056 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:01 crc kubenswrapper[4857]: I0318 14:58:01.051309 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564098-72plm"] Mar 18 14:58:01 crc kubenswrapper[4857]: E0318 14:58:01.399121 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 502 Bad Gateway" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 18 14:58:01 crc kubenswrapper[4857]: E0318 14:58:01.400487 4857 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 14:58:01 crc kubenswrapper[4857]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 18 14:58:01 crc kubenswrapper[4857]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xppnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29564098-72plm_openshift-infra(9242b541-2ecc-48ba-b447-00fee2a3b85c): ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 502 Bad Gateway Mar 18 14:58:01 crc kubenswrapper[4857]: > logger="UnhandledError" Mar 18 14:58:01 crc kubenswrapper[4857]: I0318 14:58:01.400147 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564098-72plm" event={"ID":"9242b541-2ecc-48ba-b447-00fee2a3b85c","Type":"ContainerStarted","Data":"c40d84cf6b53b3fd90a6ddd462b34e214732e2bfdbc044173e58b3343e2e5a2e"} Mar 18 14:58:01 crc kubenswrapper[4857]: E0318 14:58:01.401888 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received 
unexpected HTTP status: 502 Bad Gateway\"" pod="openshift-infra/auto-csr-approver-29564098-72plm" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" Mar 18 14:58:02 crc kubenswrapper[4857]: E0318 14:58:02.412957 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29564098-72plm" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" Mar 18 14:58:08 crc kubenswrapper[4857]: I0318 14:58:08.025043 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkgwv" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" probeResult="failure" output=< Mar 18 14:58:08 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:58:08 crc kubenswrapper[4857]: > Mar 18 14:58:10 crc kubenswrapper[4857]: I0318 14:58:10.826975 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:58:10 crc kubenswrapper[4857]: I0318 14:58:10.826977 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 14:58:18 crc kubenswrapper[4857]: I0318 14:58:18.018287 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkgwv" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" 
containerName="registry-server" probeResult="failure" output=< Mar 18 14:58:18 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:58:18 crc kubenswrapper[4857]: > Mar 18 14:58:20 crc kubenswrapper[4857]: I0318 14:58:20.331121 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564098-72plm" event={"ID":"9242b541-2ecc-48ba-b447-00fee2a3b85c","Type":"ContainerStarted","Data":"568d7d1f73077dd93d51e4c7e68de24b7fbf1b8e970831b1b76c61d308581b5d"} Mar 18 14:58:20 crc kubenswrapper[4857]: I0318 14:58:20.351636 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564098-72plm" podStartSLOduration=1.668063493 podStartE2EDuration="20.351615275s" podCreationTimestamp="2026-03-18 14:58:00 +0000 UTC" firstStartedPulling="2026-03-18 14:58:01.064121162 +0000 UTC m=+3465.193249619" lastFinishedPulling="2026-03-18 14:58:19.747672934 +0000 UTC m=+3483.876801401" observedRunningTime="2026-03-18 14:58:20.34666078 +0000 UTC m=+3484.475789237" watchObservedRunningTime="2026-03-18 14:58:20.351615275 +0000 UTC m=+3484.480743732" Mar 18 14:58:21 crc kubenswrapper[4857]: I0318 14:58:21.345911 4857 generic.go:334] "Generic (PLEG): container finished" podID="9242b541-2ecc-48ba-b447-00fee2a3b85c" containerID="568d7d1f73077dd93d51e4c7e68de24b7fbf1b8e970831b1b76c61d308581b5d" exitCode=0 Mar 18 14:58:21 crc kubenswrapper[4857]: I0318 14:58:21.345976 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564098-72plm" event={"ID":"9242b541-2ecc-48ba-b447-00fee2a3b85c","Type":"ContainerDied","Data":"568d7d1f73077dd93d51e4c7e68de24b7fbf1b8e970831b1b76c61d308581b5d"} Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.258550 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.381572 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564098-72plm" event={"ID":"9242b541-2ecc-48ba-b447-00fee2a3b85c","Type":"ContainerDied","Data":"c40d84cf6b53b3fd90a6ddd462b34e214732e2bfdbc044173e58b3343e2e5a2e"} Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.381651 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40d84cf6b53b3fd90a6ddd462b34e214732e2bfdbc044173e58b3343e2e5a2e" Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.381721 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564098-72plm" Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.435330 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564092-f5pp8"] Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.438256 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xppnv\" (UniqueName: \"kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv\") pod \"9242b541-2ecc-48ba-b447-00fee2a3b85c\" (UID: \"9242b541-2ecc-48ba-b447-00fee2a3b85c\") " Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.447718 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564092-f5pp8"] Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.450943 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv" (OuterVolumeSpecName: "kube-api-access-xppnv") pod "9242b541-2ecc-48ba-b447-00fee2a3b85c" (UID: "9242b541-2ecc-48ba-b447-00fee2a3b85c"). InnerVolumeSpecName "kube-api-access-xppnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:58:23 crc kubenswrapper[4857]: I0318 14:58:23.542325 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xppnv\" (UniqueName: \"kubernetes.io/projected/9242b541-2ecc-48ba-b447-00fee2a3b85c-kube-api-access-xppnv\") on node \"crc\" DevicePath \"\"" Mar 18 14:58:25 crc kubenswrapper[4857]: I0318 14:58:25.194439 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd963214-d435-4dd3-b38e-0c8339918824" path="/var/lib/kubelet/pods/bd963214-d435-4dd3-b38e-0c8339918824/volumes" Mar 18 14:58:27 crc kubenswrapper[4857]: I0318 14:58:27.038675 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:58:27 crc kubenswrapper[4857]: I0318 14:58:27.039293 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:58:28 crc kubenswrapper[4857]: I0318 14:58:28.169026 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkgwv" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" probeResult="failure" output=< Mar 18 14:58:28 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 14:58:28 crc kubenswrapper[4857]: > Mar 18 14:58:37 crc kubenswrapper[4857]: I0318 14:58:37.023194 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:58:37 crc kubenswrapper[4857]: I0318 
14:58:37.084827 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:58:38 crc kubenswrapper[4857]: I0318 14:58:38.213276 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:58:38 crc kubenswrapper[4857]: I0318 14:58:38.233948 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkgwv" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" containerID="cri-o://a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f" gracePeriod=2 Mar 18 14:58:38 crc kubenswrapper[4857]: I0318 14:58:38.866031 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.010274 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content\") pod \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.010416 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities\") pod \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.010482 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtgjb\" (UniqueName: \"kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb\") pod \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\" (UID: \"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc\") " Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 
14:58:39.011283 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities" (OuterVolumeSpecName: "utilities") pod "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" (UID: "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.012860 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.020380 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb" (OuterVolumeSpecName: "kube-api-access-jtgjb") pod "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" (UID: "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc"). InnerVolumeSpecName "kube-api-access-jtgjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.115339 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtgjb\" (UniqueName: \"kubernetes.io/projected/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-kube-api-access-jtgjb\") on node \"crc\" DevicePath \"\"" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.183083 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" (UID: "3952abeb-db10-43ad-85a8-4e1ac8d0a1bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.218737 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.251501 4857 generic.go:334] "Generic (PLEG): container finished" podID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerID="a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f" exitCode=0 Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.251560 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerDied","Data":"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f"} Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.251605 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkgwv" event={"ID":"3952abeb-db10-43ad-85a8-4e1ac8d0a1bc","Type":"ContainerDied","Data":"c61945d0c0a830c61f0ca18383751e54e5372271c18398467873cd5852a7a462"} Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.251607 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkgwv" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.251625 4857 scope.go:117] "RemoveContainer" containerID="a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.299219 4857 scope.go:117] "RemoveContainer" containerID="4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.299809 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.311462 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkgwv"] Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.342817 4857 scope.go:117] "RemoveContainer" containerID="246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.380733 4857 scope.go:117] "RemoveContainer" containerID="a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f" Mar 18 14:58:39 crc kubenswrapper[4857]: E0318 14:58:39.381265 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f\": container with ID starting with a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f not found: ID does not exist" containerID="a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.381321 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f"} err="failed to get container status \"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f\": rpc error: code = NotFound desc = could not find container 
\"a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f\": container with ID starting with a8a12e6a485dfb0391ce2098838b38b9633ebd2ccbc600efa090723daf5a343f not found: ID does not exist" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.381356 4857 scope.go:117] "RemoveContainer" containerID="4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156" Mar 18 14:58:39 crc kubenswrapper[4857]: E0318 14:58:39.381846 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156\": container with ID starting with 4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156 not found: ID does not exist" containerID="4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.381889 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156"} err="failed to get container status \"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156\": rpc error: code = NotFound desc = could not find container \"4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156\": container with ID starting with 4bec3ece08e67ec4c2b6bc225f429f2a40ce0c8ebe4019a3977ce94d30d23156 not found: ID does not exist" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.381917 4857 scope.go:117] "RemoveContainer" containerID="246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a" Mar 18 14:58:39 crc kubenswrapper[4857]: E0318 14:58:39.382143 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a\": container with ID starting with 246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a not found: ID does not exist" 
containerID="246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a" Mar 18 14:58:39 crc kubenswrapper[4857]: I0318 14:58:39.382176 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a"} err="failed to get container status \"246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a\": rpc error: code = NotFound desc = could not find container \"246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a\": container with ID starting with 246d00d245159d03613c07b0f87853911ec7fd022d5040fb62a89bb0809e750a not found: ID does not exist" Mar 18 14:58:41 crc kubenswrapper[4857]: I0318 14:58:41.182289 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" path="/var/lib/kubelet/pods/3952abeb-db10-43ad-85a8-4e1ac8d0a1bc/volumes" Mar 18 14:58:57 crc kubenswrapper[4857]: I0318 14:58:57.038343 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:58:57 crc kubenswrapper[4857]: I0318 14:58:57.038991 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:59:16 crc kubenswrapper[4857]: I0318 14:59:16.629334 4857 scope.go:117] "RemoveContainer" containerID="ae62422fd619a5702e55b322fceb26848da7baf34f0c03d56fbc9ea4adf811fe" Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.038617 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.039922 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.040022 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.041173 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.041266 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71" gracePeriod=600 Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.696000 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71" exitCode=0 Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.696088 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71"} Mar 18 14:59:27 crc kubenswrapper[4857]: I0318 14:59:27.696483 4857 scope.go:117] "RemoveContainer" containerID="f271b5feb2ef7f0864e13560a41ec44e78ea9a1c462151b726a8172ededbe7df" Mar 18 14:59:28 crc kubenswrapper[4857]: I0318 14:59:28.712393 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"} Mar 18 14:59:51 crc kubenswrapper[4857]: I0318 14:59:51.590925 4857 generic.go:334] "Generic (PLEG): container finished" podID="839c8978-90ec-42f6-9adb-6ca8ec295f61" containerID="61a7df3111b9e41eb74af9fcc2d903bc605085c04b254b6b1b4c5caa8b25c70f" exitCode=0 Mar 18 14:59:51 crc kubenswrapper[4857]: I0318 14:59:51.591041 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" event={"ID":"839c8978-90ec-42f6-9adb-6ca8ec295f61","Type":"ContainerDied","Data":"61a7df3111b9e41eb74af9fcc2d903bc605085c04b254b6b1b4c5caa8b25c70f"} Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.164885 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.303560 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.303745 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.303954 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.304054 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.304126 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k4d8\" (UniqueName: \"kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: 
\"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.304303 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.304853 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0\") pod \"839c8978-90ec-42f6-9adb-6ca8ec295f61\" (UID: \"839c8978-90ec-42f6-9adb-6ca8ec295f61\") " Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.314161 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8" (OuterVolumeSpecName: "kube-api-access-8k4d8") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "kube-api-access-8k4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.318470 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.353126 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.355125 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.355368 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory" (OuterVolumeSpecName: "inventory") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.371009 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.371092 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "839c8978-90ec-42f6-9adb-6ca8ec295f61" (UID: "839c8978-90ec-42f6-9adb-6ca8ec295f61"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.410349 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k4d8\" (UniqueName: \"kubernetes.io/projected/839c8978-90ec-42f6-9adb-6ca8ec295f61-kube-api-access-8k4d8\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.410725 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-inventory\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.410857 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.411010 4857 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.411117 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 
crc kubenswrapper[4857]: I0318 14:59:53.411288 4857 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.411397 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/839c8978-90ec-42f6-9adb-6ca8ec295f61-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.616317 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" event={"ID":"839c8978-90ec-42f6-9adb-6ca8ec295f61","Type":"ContainerDied","Data":"d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c"} Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.616378 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.616437 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d67eaff3a6bda956a58ed4d7c6dedb8b3772692374a101edad9c6f49bdba417c" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.832522 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx"] Mar 18 14:59:53 crc kubenswrapper[4857]: E0318 14:59:53.835653 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" containerName="oc" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.835800 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" containerName="oc" Mar 18 14:59:53 crc kubenswrapper[4857]: E0318 14:59:53.835891 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839c8978-90ec-42f6-9adb-6ca8ec295f61" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.835979 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="839c8978-90ec-42f6-9adb-6ca8ec295f61" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 18 14:59:53 crc kubenswrapper[4857]: E0318 14:59:53.836080 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="extract-content" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.836159 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="extract-content" Mar 18 14:59:53 crc kubenswrapper[4857]: E0318 14:59:53.836231 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="extract-utilities" Mar 18 14:59:53 crc 
kubenswrapper[4857]: I0318 14:59:53.836295 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="extract-utilities" Mar 18 14:59:53 crc kubenswrapper[4857]: E0318 14:59:53.836364 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.836432 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.836872 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="839c8978-90ec-42f6-9adb-6ca8ec295f61" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.837001 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="3952abeb-db10-43ad-85a8-4e1ac8d0a1bc" containerName="registry-server" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.837114 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" containerName="oc" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.838680 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.844223 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.844223 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-mz2v5" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.844619 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.844259 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.845161 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 18 14:59:53 crc kubenswrapper[4857]: I0318 14:59:53.854731 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx"] Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.031844 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.031976 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.032063 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8hm\" (UniqueName: \"kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.032160 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.032288 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.135337 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.135464 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8hm\" (UniqueName: \"kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.135608 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.135845 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.136017 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.140910 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.141654 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.143269 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.144340 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.158332 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8hm\" (UniqueName: \"kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm\") pod \"logging-edpm-deployment-openstack-edpm-ipam-hstgx\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:54 crc kubenswrapper[4857]: I0318 14:59:54.164740 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" Mar 18 14:59:55 crc kubenswrapper[4857]: I0318 14:59:55.020150 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx"] Mar 18 14:59:55 crc kubenswrapper[4857]: W0318 14:59:55.030594 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedcb5d9a_650c_4199_89c1_5f077d3f217f.slice/crio-c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c WatchSource:0}: Error finding container c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c: Status 404 returned error can't find the container with id c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c Mar 18 14:59:55 crc kubenswrapper[4857]: I0318 14:59:55.648273 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" event={"ID":"edcb5d9a-650c-4199-89c1-5f077d3f217f","Type":"ContainerStarted","Data":"c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c"} Mar 18 14:59:57 crc kubenswrapper[4857]: I0318 14:59:57.952889 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" event={"ID":"edcb5d9a-650c-4199-89c1-5f077d3f217f","Type":"ContainerStarted","Data":"1a7ae4f9f7f1da800a0a2f2bb73e79efdf0c867934a04b4273f3c7804801ebca"} Mar 18 14:59:57 crc kubenswrapper[4857]: I0318 14:59:57.986357 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" podStartSLOduration=3.096014889 podStartE2EDuration="4.986308592s" podCreationTimestamp="2026-03-18 14:59:53 +0000 UTC" 
firstStartedPulling="2026-03-18 14:59:55.036657145 +0000 UTC m=+3579.165785602" lastFinishedPulling="2026-03-18 14:59:56.926950788 +0000 UTC m=+3581.056079305" observedRunningTime="2026-03-18 14:59:57.977996892 +0000 UTC m=+3582.107125349" watchObservedRunningTime="2026-03-18 14:59:57.986308592 +0000 UTC m=+3582.115437049" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.162567 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564100-s2j95"] Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.166383 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564100-s2j95" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.170141 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.170283 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.172801 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.181569 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"] Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.184046 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.188425 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.189155 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.198205 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.198256 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.198365 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqcvp\" (UniqueName: \"kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp\") pod \"auto-csr-approver-29564100-s2j95\" (UID: \"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5\") " pod="openshift-infra/auto-csr-approver-29564100-s2j95" Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.198473 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564100-s2j95"] Mar 18 
15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.198611 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvmz\" (UniqueName: \"kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.453954 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqcvp\" (UniqueName: \"kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp\") pod \"auto-csr-approver-29564100-s2j95\" (UID: \"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5\") " pod="openshift-infra/auto-csr-approver-29564100-s2j95"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.454154 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbvmz\" (UniqueName: \"kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.457973 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.458028 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.461915 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.464006 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"]
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.487514 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.492470 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqcvp\" (UniqueName: \"kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp\") pod \"auto-csr-approver-29564100-s2j95\" (UID: \"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5\") " pod="openshift-infra/auto-csr-approver-29564100-s2j95"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.515496 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbvmz\" (UniqueName: \"kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz\") pod \"collect-profiles-29564100-q8x6q\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.614780 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:00 crc kubenswrapper[4857]: I0318 15:00:00.763293 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564100-s2j95"
Mar 18 15:00:01 crc kubenswrapper[4857]: I0318 15:00:01.197922 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"]
Mar 18 15:00:01 crc kubenswrapper[4857]: I0318 15:00:01.296086 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564100-s2j95"]
Mar 18 15:00:02 crc kubenswrapper[4857]: I0318 15:00:02.008944 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564100-s2j95" event={"ID":"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5","Type":"ContainerStarted","Data":"50b3e6e775e6c8e2264628d0fcdc0b539e6af63ef8aa422cfb763e624cea5f1c"}
Mar 18 15:00:02 crc kubenswrapper[4857]: I0318 15:00:02.011358 4857 generic.go:334] "Generic (PLEG): container finished" podID="ab05d322-74c0-4edb-b81e-ef6338d60930" containerID="eb466d8ba0bda9256d523667f2c099b03d810b3ca8453b08a11121c835115b02" exitCode=0
Mar 18 15:00:02 crc kubenswrapper[4857]: I0318 15:00:02.011397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" event={"ID":"ab05d322-74c0-4edb-b81e-ef6338d60930","Type":"ContainerDied","Data":"eb466d8ba0bda9256d523667f2c099b03d810b3ca8453b08a11121c835115b02"}
Mar 18 15:00:02 crc kubenswrapper[4857]: I0318 15:00:02.011415 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" event={"ID":"ab05d322-74c0-4edb-b81e-ef6338d60930","Type":"ContainerStarted","Data":"dbbb4fca3c2bc507f90427c65a4e4bec12dbd7288d85d7c748424a4dad50c8ae"}
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.441958 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.473829 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume\") pod \"ab05d322-74c0-4edb-b81e-ef6338d60930\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") "
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.474313 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume\") pod \"ab05d322-74c0-4edb-b81e-ef6338d60930\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") "
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.474605 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbvmz\" (UniqueName: \"kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz\") pod \"ab05d322-74c0-4edb-b81e-ef6338d60930\" (UID: \"ab05d322-74c0-4edb-b81e-ef6338d60930\") "
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.475363 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab05d322-74c0-4edb-b81e-ef6338d60930" (UID: "ab05d322-74c0-4edb-b81e-ef6338d60930"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.475781 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab05d322-74c0-4edb-b81e-ef6338d60930-config-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.483810 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab05d322-74c0-4edb-b81e-ef6338d60930" (UID: "ab05d322-74c0-4edb-b81e-ef6338d60930"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.484409 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz" (OuterVolumeSpecName: "kube-api-access-nbvmz") pod "ab05d322-74c0-4edb-b81e-ef6338d60930" (UID: "ab05d322-74c0-4edb-b81e-ef6338d60930"). InnerVolumeSpecName "kube-api-access-nbvmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.578612 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab05d322-74c0-4edb-b81e-ef6338d60930-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:03 crc kubenswrapper[4857]: I0318 15:00:03.578664 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbvmz\" (UniqueName: \"kubernetes.io/projected/ab05d322-74c0-4edb-b81e-ef6338d60930-kube-api-access-nbvmz\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:04 crc kubenswrapper[4857]: I0318 15:00:04.036684 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q" event={"ID":"ab05d322-74c0-4edb-b81e-ef6338d60930","Type":"ContainerDied","Data":"dbbb4fca3c2bc507f90427c65a4e4bec12dbd7288d85d7c748424a4dad50c8ae"}
Mar 18 15:00:04 crc kubenswrapper[4857]: I0318 15:00:04.036812 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbb4fca3c2bc507f90427c65a4e4bec12dbd7288d85d7c748424a4dad50c8ae"
Mar 18 15:00:04 crc kubenswrapper[4857]: I0318 15:00:04.036843 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"
Mar 18 15:00:04 crc kubenswrapper[4857]: I0318 15:00:04.565261 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98"]
Mar 18 15:00:04 crc kubenswrapper[4857]: I0318 15:00:04.579361 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564055-5qq98"]
Mar 18 15:00:05 crc kubenswrapper[4857]: I0318 15:00:05.187596 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852eb59a-14cd-48b7-86ed-d25d1d7f7a09" path="/var/lib/kubelet/pods/852eb59a-14cd-48b7-86ed-d25d1d7f7a09/volumes"
Mar 18 15:00:07 crc kubenswrapper[4857]: I0318 15:00:07.084478 4857 generic.go:334] "Generic (PLEG): container finished" podID="b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" containerID="6466c4f3f84ddd06bdf67d98706e7b1be53b4a95dbd2e1becab89cbea40825ca" exitCode=0
Mar 18 15:00:07 crc kubenswrapper[4857]: I0318 15:00:07.084830 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564100-s2j95" event={"ID":"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5","Type":"ContainerDied","Data":"6466c4f3f84ddd06bdf67d98706e7b1be53b4a95dbd2e1becab89cbea40825ca"}
Mar 18 15:00:08 crc kubenswrapper[4857]: I0318 15:00:08.758322 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564100-s2j95"
Mar 18 15:00:08 crc kubenswrapper[4857]: I0318 15:00:08.768475 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqcvp\" (UniqueName: \"kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp\") pod \"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5\" (UID: \"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5\") "
Mar 18 15:00:08 crc kubenswrapper[4857]: I0318 15:00:08.781732 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp" (OuterVolumeSpecName: "kube-api-access-pqcvp") pod "b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" (UID: "b85b3afd-44d5-4afa-96af-77b9e7f9d2c5"). InnerVolumeSpecName "kube-api-access-pqcvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:00:08 crc kubenswrapper[4857]: I0318 15:00:08.886497 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqcvp\" (UniqueName: \"kubernetes.io/projected/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5-kube-api-access-pqcvp\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:09 crc kubenswrapper[4857]: I0318 15:00:09.110809 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564100-s2j95"
Mar 18 15:00:09 crc kubenswrapper[4857]: I0318 15:00:09.110868 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564100-s2j95" event={"ID":"b85b3afd-44d5-4afa-96af-77b9e7f9d2c5","Type":"ContainerDied","Data":"50b3e6e775e6c8e2264628d0fcdc0b539e6af63ef8aa422cfb763e624cea5f1c"}
Mar 18 15:00:09 crc kubenswrapper[4857]: I0318 15:00:09.110920 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b3e6e775e6c8e2264628d0fcdc0b539e6af63ef8aa422cfb763e624cea5f1c"
Mar 18 15:00:09 crc kubenswrapper[4857]: I0318 15:00:09.852280 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564094-p7bb4"]
Mar 18 15:00:09 crc kubenswrapper[4857]: I0318 15:00:09.864735 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564094-p7bb4"]
Mar 18 15:00:11 crc kubenswrapper[4857]: I0318 15:00:11.195302 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2" path="/var/lib/kubelet/pods/d398ecfb-ca3a-4f86-ae3d-4179cab2e2a2/volumes"
Mar 18 15:00:13 crc kubenswrapper[4857]: I0318 15:00:13.179049 4857 generic.go:334] "Generic (PLEG): container finished" podID="edcb5d9a-650c-4199-89c1-5f077d3f217f" containerID="1a7ae4f9f7f1da800a0a2f2bb73e79efdf0c867934a04b4273f3c7804801ebca" exitCode=0
Mar 18 15:00:13 crc kubenswrapper[4857]: I0318 15:00:13.193657 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" event={"ID":"edcb5d9a-650c-4199-89c1-5f077d3f217f","Type":"ContainerDied","Data":"1a7ae4f9f7f1da800a0a2f2bb73e79efdf0c867934a04b4273f3c7804801ebca"}
Mar 18 15:00:14 crc kubenswrapper[4857]: I0318 15:00:14.833528 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx"
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.023705 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz8hm\" (UniqueName: \"kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm\") pod \"edcb5d9a-650c-4199-89c1-5f077d3f217f\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") "
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.024295 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1\") pod \"edcb5d9a-650c-4199-89c1-5f077d3f217f\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") "
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.024345 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0\") pod \"edcb5d9a-650c-4199-89c1-5f077d3f217f\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") "
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.024380 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam\") pod \"edcb5d9a-650c-4199-89c1-5f077d3f217f\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") "
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.024474 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory\") pod \"edcb5d9a-650c-4199-89c1-5f077d3f217f\" (UID: \"edcb5d9a-650c-4199-89c1-5f077d3f217f\") "
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.044439 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm" (OuterVolumeSpecName: "kube-api-access-kz8hm") pod "edcb5d9a-650c-4199-89c1-5f077d3f217f" (UID: "edcb5d9a-650c-4199-89c1-5f077d3f217f"). InnerVolumeSpecName "kube-api-access-kz8hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.060891 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "edcb5d9a-650c-4199-89c1-5f077d3f217f" (UID: "edcb5d9a-650c-4199-89c1-5f077d3f217f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.070131 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "edcb5d9a-650c-4199-89c1-5f077d3f217f" (UID: "edcb5d9a-650c-4199-89c1-5f077d3f217f"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.090484 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory" (OuterVolumeSpecName: "inventory") pod "edcb5d9a-650c-4199-89c1-5f077d3f217f" (UID: "edcb5d9a-650c-4199-89c1-5f077d3f217f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.091833 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "edcb5d9a-650c-4199-89c1-5f077d3f217f" (UID: "edcb5d9a-650c-4199-89c1-5f077d3f217f"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.129029 4857 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.129068 4857 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.129083 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.129097 4857 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edcb5d9a-650c-4199-89c1-5f077d3f217f-inventory\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.129108 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz8hm\" (UniqueName: \"kubernetes.io/projected/edcb5d9a-650c-4199-89c1-5f077d3f217f-kube-api-access-kz8hm\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.216232 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx" event={"ID":"edcb5d9a-650c-4199-89c1-5f077d3f217f","Type":"ContainerDied","Data":"c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c"}
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.216304 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6406a241b5f6b091c3f231c1000ef490adcb09ff19a1c9c41fb72c2d935247c"
Mar 18 15:00:15 crc kubenswrapper[4857]: I0318 15:00:15.216385 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-hstgx"
Mar 18 15:00:16 crc kubenswrapper[4857]: I0318 15:00:16.767703 4857 scope.go:117] "RemoveContainer" containerID="49c984d649fd6e870af76f201592d11de9c8f403ed252453e755d19e66db3ebc"
Mar 18 15:00:16 crc kubenswrapper[4857]: I0318 15:00:16.855145 4857 scope.go:117] "RemoveContainer" containerID="7d587065861c19a1692a67f5854e10cf1b4479642b9de329546d0066363c5da8"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.318924 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:18 crc kubenswrapper[4857]: E0318 15:00:18.320043 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab05d322-74c0-4edb-b81e-ef6338d60930" containerName="collect-profiles"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320076 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab05d322-74c0-4edb-b81e-ef6338d60930" containerName="collect-profiles"
Mar 18 15:00:18 crc kubenswrapper[4857]: E0318 15:00:18.320109 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" containerName="oc"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320122 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" containerName="oc"
Mar 18 15:00:18 crc kubenswrapper[4857]: E0318 15:00:18.320149 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcb5d9a-650c-4199-89c1-5f077d3f217f" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320160 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcb5d9a-650c-4199-89c1-5f077d3f217f" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320517 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" containerName="oc"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320542 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab05d322-74c0-4edb-b81e-ef6338d60930" containerName="collect-profiles"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.320565 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcb5d9a-650c-4199-89c1-5f077d3f217f" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.323396 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.336030 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.452624 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.452775 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.454263 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4mmv\" (UniqueName: \"kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.556601 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.556777 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mmv\" (UniqueName: \"kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.556879 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.557436 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.557808 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.588967 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4mmv\" (UniqueName: \"kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv\") pod \"community-operators-tmlk9\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") " pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:18 crc kubenswrapper[4857]: I0318 15:00:18.662968 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:19 crc kubenswrapper[4857]: I0318 15:00:19.376431 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:20 crc kubenswrapper[4857]: I0318 15:00:20.340419 4857 generic.go:334] "Generic (PLEG): container finished" podID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerID="eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee" exitCode=0
Mar 18 15:00:20 crc kubenswrapper[4857]: I0318 15:00:20.340555 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerDied","Data":"eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee"}
Mar 18 15:00:20 crc kubenswrapper[4857]: I0318 15:00:20.340841 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerStarted","Data":"275aa195fc4a1f92f9c068e11fbb5cf8c2c67b9949bffd3edf30a37f77cfcaf1"}
Mar 18 15:00:23 crc kubenswrapper[4857]: I0318 15:00:23.399175 4857 generic.go:334] "Generic (PLEG): container finished" podID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerID="206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd" exitCode=0
Mar 18 15:00:23 crc kubenswrapper[4857]: I0318 15:00:23.399328 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerDied","Data":"206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd"}
Mar 18 15:00:28 crc kubenswrapper[4857]: I0318 15:00:28.538454 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerStarted","Data":"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"}
Mar 18 15:00:28 crc kubenswrapper[4857]: I0318 15:00:28.584455 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmlk9" podStartSLOduration=3.147797511 podStartE2EDuration="10.583724196s" podCreationTimestamp="2026-03-18 15:00:18 +0000 UTC" firstStartedPulling="2026-03-18 15:00:20.343293118 +0000 UTC m=+3604.472421595" lastFinishedPulling="2026-03-18 15:00:27.779219833 +0000 UTC m=+3611.908348280" observedRunningTime="2026-03-18 15:00:28.566938173 +0000 UTC m=+3612.696066630" watchObservedRunningTime="2026-03-18 15:00:28.583724196 +0000 UTC m=+3612.712852663"
Mar 18 15:00:28 crc kubenswrapper[4857]: I0318 15:00:28.663928 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:28 crc kubenswrapper[4857]: I0318 15:00:28.663993 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:30 crc kubenswrapper[4857]: I0318 15:00:30.003349 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tmlk9" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="registry-server" probeResult="failure" output=<
Mar 18 15:00:30 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 15:00:30 crc kubenswrapper[4857]: >
Mar 18 15:00:38 crc kubenswrapper[4857]: I0318 15:00:38.749706 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:38 crc kubenswrapper[4857]: I0318 15:00:38.821947 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:39 crc kubenswrapper[4857]: I0318 15:00:39.001058 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:40 crc kubenswrapper[4857]: I0318 15:00:40.694970 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tmlk9" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="registry-server" containerID="cri-o://76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575" gracePeriod=2
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.256915 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.421498 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities\") pod \"2a17783d-8506-491c-bb7e-bee12a61bf68\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") "
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.421831 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content\") pod \"2a17783d-8506-491c-bb7e-bee12a61bf68\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") "
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.421987 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4mmv\" (UniqueName: \"kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv\") pod \"2a17783d-8506-491c-bb7e-bee12a61bf68\" (UID: \"2a17783d-8506-491c-bb7e-bee12a61bf68\") "
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.424297 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities" (OuterVolumeSpecName: "utilities") pod "2a17783d-8506-491c-bb7e-bee12a61bf68" (UID: "2a17783d-8506-491c-bb7e-bee12a61bf68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.429061 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv" (OuterVolumeSpecName: "kube-api-access-z4mmv") pod "2a17783d-8506-491c-bb7e-bee12a61bf68" (UID: "2a17783d-8506-491c-bb7e-bee12a61bf68"). InnerVolumeSpecName "kube-api-access-z4mmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.501672 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a17783d-8506-491c-bb7e-bee12a61bf68" (UID: "2a17783d-8506-491c-bb7e-bee12a61bf68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.525100 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4mmv\" (UniqueName: \"kubernetes.io/projected/2a17783d-8506-491c-bb7e-bee12a61bf68-kube-api-access-z4mmv\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.525142 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-utilities\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.525156 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a17783d-8506-491c-bb7e-bee12a61bf68-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.710708 4857 generic.go:334] "Generic (PLEG): container finished" podID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerID="76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575" exitCode=0
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.710795 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerDied","Data":"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"}
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.710846 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmlk9" event={"ID":"2a17783d-8506-491c-bb7e-bee12a61bf68","Type":"ContainerDied","Data":"275aa195fc4a1f92f9c068e11fbb5cf8c2c67b9949bffd3edf30a37f77cfcaf1"}
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.710870 4857 scope.go:117] "RemoveContainer" containerID="76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.710939 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmlk9"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.745049 4857 scope.go:117] "RemoveContainer" containerID="206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.762252 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.776505 4857 scope.go:117] "RemoveContainer" containerID="eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.785814 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tmlk9"]
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.832030 4857 scope.go:117] "RemoveContainer" containerID="76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"
Mar 18 15:00:41 crc kubenswrapper[4857]: E0318 15:00:41.832834 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575\": container with ID starting with 76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575 not found: ID does not exist" containerID="76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.832899 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575"} err="failed to get container status \"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575\": rpc error: code = NotFound desc = could not find container \"76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575\": container with ID starting with 76a6f15ac89bd78eaec0c83943fdd32f2a3a0f86e477a979df3bac3d868d9575 not found: ID does not exist"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.832938 4857 scope.go:117] "RemoveContainer" containerID="206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd"
Mar 18 15:00:41 crc kubenswrapper[4857]: E0318 15:00:41.833386 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd\": container with ID starting with 206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd not found: ID does not exist" containerID="206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.833426 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd"} err="failed to get container status \"206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd\": rpc error: code = NotFound desc = could not find container \"206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd\": container with ID starting with 206c94ec4b85266ca3e292a59ec6710b48899b4e3201939b6f4ef662306925bd not found: ID does not exist"
Mar 18 15:00:41 crc kubenswrapper[4857]: I0318 15:00:41.833456 4857 scope.go:117] "RemoveContainer" containerID="eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee"
Mar 18 15:00:41 crc kubenswrapper[4857]: E0318 15:00:41.833848 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee\": container with ID starting with eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee not found: ID does not exist" containerID="eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee"
Mar 18 15:00:41 crc
kubenswrapper[4857]: I0318 15:00:41.833875 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee"} err="failed to get container status \"eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee\": rpc error: code = NotFound desc = could not find container \"eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee\": container with ID starting with eac3c8c4d2b0a39ff7e70d96e82edf33d35542f45aeb5d1d597062ef1c9cabee not found: ID does not exist" Mar 18 15:00:43 crc kubenswrapper[4857]: I0318 15:00:43.177676 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" path="/var/lib/kubelet/pods/2a17783d-8506-491c-bb7e-bee12a61bf68/volumes" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.615713 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:00:45 crc kubenswrapper[4857]: E0318 15:00:45.619091 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="extract-content" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.619381 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="extract-content" Mar 18 15:00:45 crc kubenswrapper[4857]: E0318 15:00:45.619620 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="registry-server" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.619806 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="registry-server" Mar 18 15:00:45 crc kubenswrapper[4857]: E0318 15:00:45.620014 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="extract-utilities" Mar 18 
15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.620175 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="extract-utilities" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.621371 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a17783d-8506-491c-bb7e-bee12a61bf68" containerName="registry-server" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.625540 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.627802 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.683299 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.684561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztfc\" (UniqueName: \"kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.684891 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 
18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.786938 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.787150 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.787239 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wztfc\" (UniqueName: \"kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.787710 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.788052 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 
15:00:45.823788 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wztfc\" (UniqueName: \"kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc\") pod \"redhat-marketplace-65x96\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:45 crc kubenswrapper[4857]: I0318 15:00:45.960245 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:00:46 crc kubenswrapper[4857]: I0318 15:00:46.523420 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:00:46 crc kubenswrapper[4857]: I0318 15:00:46.780085 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerStarted","Data":"7ebb17f4ded7a99bc4d74125231810763e142fb4a62b73dc0bdfc740a8ac70f3"} Mar 18 15:00:47 crc kubenswrapper[4857]: I0318 15:00:47.794099 4857 generic.go:334] "Generic (PLEG): container finished" podID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerID="8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf" exitCode=0 Mar 18 15:00:47 crc kubenswrapper[4857]: I0318 15:00:47.794155 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerDied","Data":"8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf"} Mar 18 15:00:50 crc kubenswrapper[4857]: I0318 15:00:50.863475 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerStarted","Data":"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61"} Mar 18 15:00:51 crc kubenswrapper[4857]: I0318 
15:00:51.877665 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerDied","Data":"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61"} Mar 18 15:00:51 crc kubenswrapper[4857]: I0318 15:00:51.877487 4857 generic.go:334] "Generic (PLEG): container finished" podID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerID="3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61" exitCode=0 Mar 18 15:00:53 crc kubenswrapper[4857]: E0318 15:00:53.131499 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:b856e4d37af238240aaa3504ebf72881a05d3e5875365377d4fbd3a313fe7d06" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Mar 18 15:00:53 crc kubenswrapper[4857]: E0318 15:00:53.131937 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wztfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-65x96_openshift-marketplace(c5f56919-ed99-4c89-bb5d-5a66e062a776): ErrImagePull: parsing image configuration: Download config.json digest 
sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:b856e4d37af238240aaa3504ebf72881a05d3e5875365377d4fbd3a313fe7d06" logger="UnhandledError" Mar 18 15:00:53 crc kubenswrapper[4857]: E0318 15:00:53.133233 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:b856e4d37af238240aaa3504ebf72881a05d3e5875365377d4fbd3a313fe7d06\"" pod="openshift-marketplace/redhat-marketplace-65x96" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" Mar 18 15:00:53 crc kubenswrapper[4857]: E0318 15:00:53.911496 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-65x96" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.164719 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29564101-wqh7w"] Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.167745 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.207871 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29564101-wqh7w"] Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.238641 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.239351 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.239495 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.239655 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.341889 4857 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.342035 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.342120 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.342206 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.349947 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.350212 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.356653 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.370393 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5\") pod \"keystone-cron-29564101-wqh7w\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:00 crc kubenswrapper[4857]: I0318 15:01:00.537279 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:01 crc kubenswrapper[4857]: I0318 15:01:01.047686 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29564101-wqh7w"] Mar 18 15:01:02 crc kubenswrapper[4857]: I0318 15:01:02.014315 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29564101-wqh7w" event={"ID":"a3189582-ce5c-4457-b558-181d14d1e6e8","Type":"ContainerStarted","Data":"8ad5715258ad5f8d28e3998befd717473a1313522228d84599be6b916f2c94dd"} Mar 18 15:01:02 crc kubenswrapper[4857]: I0318 15:01:02.014913 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29564101-wqh7w" event={"ID":"a3189582-ce5c-4457-b558-181d14d1e6e8","Type":"ContainerStarted","Data":"9cf7a48c5c75e4a4840bf8c2cd9ee3b635e302262b48de177b43e220443c3031"} Mar 18 15:01:02 crc kubenswrapper[4857]: I0318 15:01:02.045408 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29564101-wqh7w" podStartSLOduration=2.045387671 podStartE2EDuration="2.045387671s" podCreationTimestamp="2026-03-18 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 15:01:02.035610005 +0000 UTC m=+3646.164738462" watchObservedRunningTime="2026-03-18 15:01:02.045387671 +0000 UTC m=+3646.174516128" Mar 18 15:01:05 crc kubenswrapper[4857]: I0318 15:01:05.168800 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 15:01:07 crc kubenswrapper[4857]: I0318 15:01:07.079694 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerStarted","Data":"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34"} Mar 18 15:01:07 crc kubenswrapper[4857]: I0318 15:01:07.081694 4857 
generic.go:334] "Generic (PLEG): container finished" podID="a3189582-ce5c-4457-b558-181d14d1e6e8" containerID="8ad5715258ad5f8d28e3998befd717473a1313522228d84599be6b916f2c94dd" exitCode=0 Mar 18 15:01:07 crc kubenswrapper[4857]: I0318 15:01:07.081767 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29564101-wqh7w" event={"ID":"a3189582-ce5c-4457-b558-181d14d1e6e8","Type":"ContainerDied","Data":"8ad5715258ad5f8d28e3998befd717473a1313522228d84599be6b916f2c94dd"} Mar 18 15:01:07 crc kubenswrapper[4857]: I0318 15:01:07.130341 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-65x96" podStartSLOduration=3.549721141 podStartE2EDuration="22.130310476s" podCreationTimestamp="2026-03-18 15:00:45 +0000 UTC" firstStartedPulling="2026-03-18 15:00:47.796556034 +0000 UTC m=+3631.925684491" lastFinishedPulling="2026-03-18 15:01:06.377145369 +0000 UTC m=+3650.506273826" observedRunningTime="2026-03-18 15:01:07.105695135 +0000 UTC m=+3651.234823622" watchObservedRunningTime="2026-03-18 15:01:07.130310476 +0000 UTC m=+3651.259438923" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.473484 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.620553 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle\") pod \"a3189582-ce5c-4457-b558-181d14d1e6e8\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.620705 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys\") pod \"a3189582-ce5c-4457-b558-181d14d1e6e8\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.620981 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5\") pod \"a3189582-ce5c-4457-b558-181d14d1e6e8\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.621072 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data\") pod \"a3189582-ce5c-4457-b558-181d14d1e6e8\" (UID: \"a3189582-ce5c-4457-b558-181d14d1e6e8\") " Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.628023 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5" (OuterVolumeSpecName: "kube-api-access-c9jx5") pod "a3189582-ce5c-4457-b558-181d14d1e6e8" (UID: "a3189582-ce5c-4457-b558-181d14d1e6e8"). InnerVolumeSpecName "kube-api-access-c9jx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.628149 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a3189582-ce5c-4457-b558-181d14d1e6e8" (UID: "a3189582-ce5c-4457-b558-181d14d1e6e8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.666904 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3189582-ce5c-4457-b558-181d14d1e6e8" (UID: "a3189582-ce5c-4457-b558-181d14d1e6e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.697987 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data" (OuterVolumeSpecName: "config-data") pod "a3189582-ce5c-4457-b558-181d14d1e6e8" (UID: "a3189582-ce5c-4457-b558-181d14d1e6e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.725145 4857 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.725211 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9jx5\" (UniqueName: \"kubernetes.io/projected/a3189582-ce5c-4457-b558-181d14d1e6e8-kube-api-access-c9jx5\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.725235 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:08 crc kubenswrapper[4857]: I0318 15:01:08.725260 4857 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3189582-ce5c-4457-b558-181d14d1e6e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:09 crc kubenswrapper[4857]: I0318 15:01:09.115472 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29564101-wqh7w" event={"ID":"a3189582-ce5c-4457-b558-181d14d1e6e8","Type":"ContainerDied","Data":"9cf7a48c5c75e4a4840bf8c2cd9ee3b635e302262b48de177b43e220443c3031"} Mar 18 15:01:09 crc kubenswrapper[4857]: I0318 15:01:09.115910 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf7a48c5c75e4a4840bf8c2cd9ee3b635e302262b48de177b43e220443c3031" Mar 18 15:01:09 crc kubenswrapper[4857]: I0318 15:01:09.115522 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29564101-wqh7w" Mar 18 15:01:15 crc kubenswrapper[4857]: I0318 15:01:15.960616 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:15 crc kubenswrapper[4857]: I0318 15:01:15.960997 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:16 crc kubenswrapper[4857]: I0318 15:01:16.041679 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:16 crc kubenswrapper[4857]: I0318 15:01:16.264363 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:16 crc kubenswrapper[4857]: I0318 15:01:16.815455 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.235507 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-65x96" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="registry-server" containerID="cri-o://b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34" gracePeriod=2 Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.798397 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.984921 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities\") pod \"c5f56919-ed99-4c89-bb5d-5a66e062a776\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.985201 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wztfc\" (UniqueName: \"kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc\") pod \"c5f56919-ed99-4c89-bb5d-5a66e062a776\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.985268 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content\") pod \"c5f56919-ed99-4c89-bb5d-5a66e062a776\" (UID: \"c5f56919-ed99-4c89-bb5d-5a66e062a776\") " Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.986003 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities" (OuterVolumeSpecName: "utilities") pod "c5f56919-ed99-4c89-bb5d-5a66e062a776" (UID: "c5f56919-ed99-4c89-bb5d-5a66e062a776"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:01:18 crc kubenswrapper[4857]: I0318 15:01:18.995046 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc" (OuterVolumeSpecName: "kube-api-access-wztfc") pod "c5f56919-ed99-4c89-bb5d-5a66e062a776" (UID: "c5f56919-ed99-4c89-bb5d-5a66e062a776"). InnerVolumeSpecName "kube-api-access-wztfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.066183 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5f56919-ed99-4c89-bb5d-5a66e062a776" (UID: "c5f56919-ed99-4c89-bb5d-5a66e062a776"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.088608 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wztfc\" (UniqueName: \"kubernetes.io/projected/c5f56919-ed99-4c89-bb5d-5a66e062a776-kube-api-access-wztfc\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.088884 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.088968 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f56919-ed99-4c89-bb5d-5a66e062a776-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.249965 4857 generic.go:334] "Generic (PLEG): container finished" podID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerID="b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34" exitCode=0 Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.250023 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65x96" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.250028 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerDied","Data":"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34"} Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.250059 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65x96" event={"ID":"c5f56919-ed99-4c89-bb5d-5a66e062a776","Type":"ContainerDied","Data":"7ebb17f4ded7a99bc4d74125231810763e142fb4a62b73dc0bdfc740a8ac70f3"} Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.250097 4857 scope.go:117] "RemoveContainer" containerID="b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.280149 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.284362 4857 scope.go:117] "RemoveContainer" containerID="3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.291772 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-65x96"] Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.311644 4857 scope.go:117] "RemoveContainer" containerID="8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.374973 4857 scope.go:117] "RemoveContainer" containerID="b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34" Mar 18 15:01:19 crc kubenswrapper[4857]: E0318 15:01:19.376422 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34\": container with ID starting with b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34 not found: ID does not exist" containerID="b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.376463 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34"} err="failed to get container status \"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34\": rpc error: code = NotFound desc = could not find container \"b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34\": container with ID starting with b1058b22b9a84cae89826434f51fc0a1c0ac15b0487487a5cb5c2f65d04a5a34 not found: ID does not exist" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.376493 4857 scope.go:117] "RemoveContainer" containerID="3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61" Mar 18 15:01:19 crc kubenswrapper[4857]: E0318 15:01:19.378064 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61\": container with ID starting with 3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61 not found: ID does not exist" containerID="3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.378093 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61"} err="failed to get container status \"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61\": rpc error: code = NotFound desc = could not find container \"3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61\": container with ID 
starting with 3bae7550d01c156074eabf684fc8e197faa39d0744544918598891b646e29a61 not found: ID does not exist" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.378116 4857 scope.go:117] "RemoveContainer" containerID="8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf" Mar 18 15:01:19 crc kubenswrapper[4857]: E0318 15:01:19.378525 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf\": container with ID starting with 8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf not found: ID does not exist" containerID="8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf" Mar 18 15:01:19 crc kubenswrapper[4857]: I0318 15:01:19.378559 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf"} err="failed to get container status \"8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf\": rpc error: code = NotFound desc = could not find container \"8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf\": container with ID starting with 8a1fe9423ab5658a407f195176a5c37ec429fde31fe59cb57e7e75db6188a0bf not found: ID does not exist" Mar 18 15:01:21 crc kubenswrapper[4857]: I0318 15:01:21.179042 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" path="/var/lib/kubelet/pods/c5f56919-ed99-4c89-bb5d-5a66e062a776/volumes" Mar 18 15:01:27 crc kubenswrapper[4857]: I0318 15:01:27.039256 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:01:27 crc kubenswrapper[4857]: I0318 
15:01:27.039916 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.013173 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:43 crc kubenswrapper[4857]: E0318 15:01:43.015195 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="registry-server" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.015228 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="registry-server" Mar 18 15:01:43 crc kubenswrapper[4857]: E0318 15:01:43.015298 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="extract-content" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.015317 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="extract-content" Mar 18 15:01:43 crc kubenswrapper[4857]: E0318 15:01:43.015419 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="extract-utilities" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.015446 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="extract-utilities" Mar 18 15:01:43 crc kubenswrapper[4857]: E0318 15:01:43.015493 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3189582-ce5c-4457-b558-181d14d1e6e8" containerName="keystone-cron" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.015513 4857 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a3189582-ce5c-4457-b558-181d14d1e6e8" containerName="keystone-cron" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.016073 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3189582-ce5c-4457-b558-181d14d1e6e8" containerName="keystone-cron" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.016137 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f56919-ed99-4c89-bb5d-5a66e062a776" containerName="registry-server" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.028169 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.030835 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.114131 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.114285 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrpq2\" (UniqueName: \"kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.114917 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities\") pod 
\"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.217904 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.218015 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.218116 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrpq2\" (UniqueName: \"kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.218898 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.219107 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content\") pod \"certified-operators-g6jmt\" (UID: 
\"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.240007 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrpq2\" (UniqueName: \"kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2\") pod \"certified-operators-g6jmt\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.359929 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:43 crc kubenswrapper[4857]: I0318 15:01:43.961871 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:44 crc kubenswrapper[4857]: I0318 15:01:44.580486 4857 generic.go:334] "Generic (PLEG): container finished" podID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerID="9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad" exitCode=0 Mar 18 15:01:44 crc kubenswrapper[4857]: I0318 15:01:44.580545 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerDied","Data":"9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad"} Mar 18 15:01:44 crc kubenswrapper[4857]: I0318 15:01:44.580582 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerStarted","Data":"8d0542913a88b255c81138606c5873bd719b7365155c454de2de4a4293c84a35"} Mar 18 15:01:46 crc kubenswrapper[4857]: I0318 15:01:46.605556 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" 
event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerStarted","Data":"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3"} Mar 18 15:01:47 crc kubenswrapper[4857]: I0318 15:01:47.621492 4857 generic.go:334] "Generic (PLEG): container finished" podID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerID="9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3" exitCode=0 Mar 18 15:01:47 crc kubenswrapper[4857]: I0318 15:01:47.621547 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerDied","Data":"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3"} Mar 18 15:01:48 crc kubenswrapper[4857]: I0318 15:01:48.641500 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerStarted","Data":"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae"} Mar 18 15:01:48 crc kubenswrapper[4857]: I0318 15:01:48.670283 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6jmt" podStartSLOduration=3.15911208 podStartE2EDuration="6.670263528s" podCreationTimestamp="2026-03-18 15:01:42 +0000 UTC" firstStartedPulling="2026-03-18 15:01:44.58297425 +0000 UTC m=+3688.712102717" lastFinishedPulling="2026-03-18 15:01:48.094125708 +0000 UTC m=+3692.223254165" observedRunningTime="2026-03-18 15:01:48.664540773 +0000 UTC m=+3692.793669270" watchObservedRunningTime="2026-03-18 15:01:48.670263528 +0000 UTC m=+3692.799391985" Mar 18 15:01:53 crc kubenswrapper[4857]: I0318 15:01:53.360127 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:53 crc kubenswrapper[4857]: I0318 15:01:53.361611 4857 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:53 crc kubenswrapper[4857]: I0318 15:01:53.415873 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:53 crc kubenswrapper[4857]: I0318 15:01:53.768834 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:53 crc kubenswrapper[4857]: I0318 15:01:53.857433 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:55 crc kubenswrapper[4857]: I0318 15:01:55.742698 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6jmt" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="registry-server" containerID="cri-o://d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae" gracePeriod=2 Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.313911 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.376961 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities\") pod \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.377800 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrpq2\" (UniqueName: \"kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2\") pod \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.377951 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content\") pod \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\" (UID: \"175a6ee7-5fbb-4920-8c8f-47e35ba83d61\") " Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.378213 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities" (OuterVolumeSpecName: "utilities") pod "175a6ee7-5fbb-4920-8c8f-47e35ba83d61" (UID: "175a6ee7-5fbb-4920-8c8f-47e35ba83d61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.379734 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.406093 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2" (OuterVolumeSpecName: "kube-api-access-rrpq2") pod "175a6ee7-5fbb-4920-8c8f-47e35ba83d61" (UID: "175a6ee7-5fbb-4920-8c8f-47e35ba83d61"). InnerVolumeSpecName "kube-api-access-rrpq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.441383 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "175a6ee7-5fbb-4920-8c8f-47e35ba83d61" (UID: "175a6ee7-5fbb-4920-8c8f-47e35ba83d61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.484028 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.484098 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrpq2\" (UniqueName: \"kubernetes.io/projected/175a6ee7-5fbb-4920-8c8f-47e35ba83d61-kube-api-access-rrpq2\") on node \"crc\" DevicePath \"\"" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.861674 4857 generic.go:334] "Generic (PLEG): container finished" podID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerID="d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae" exitCode=0 Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.861744 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerDied","Data":"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae"} Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.861789 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6jmt" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.861825 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6jmt" event={"ID":"175a6ee7-5fbb-4920-8c8f-47e35ba83d61","Type":"ContainerDied","Data":"8d0542913a88b255c81138606c5873bd719b7365155c454de2de4a4293c84a35"} Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.861851 4857 scope.go:117] "RemoveContainer" containerID="d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.901233 4857 scope.go:117] "RemoveContainer" containerID="9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3" Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.923834 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.934908 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6jmt"] Mar 18 15:01:56 crc kubenswrapper[4857]: I0318 15:01:56.968243 4857 scope.go:117] "RemoveContainer" containerID="9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.021294 4857 scope.go:117] "RemoveContainer" containerID="d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae" Mar 18 15:01:57 crc kubenswrapper[4857]: E0318 15:01:57.022146 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae\": container with ID starting with d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae not found: ID does not exist" containerID="d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.022473 4857 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae"} err="failed to get container status \"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae\": rpc error: code = NotFound desc = could not find container \"d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae\": container with ID starting with d42abf565a1ca5b0938451197ae3e2b232aec1576653a57a11c282262e6d82ae not found: ID does not exist" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.022513 4857 scope.go:117] "RemoveContainer" containerID="9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3" Mar 18 15:01:57 crc kubenswrapper[4857]: E0318 15:01:57.022938 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3\": container with ID starting with 9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3 not found: ID does not exist" containerID="9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.022975 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3"} err="failed to get container status \"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3\": rpc error: code = NotFound desc = could not find container \"9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3\": container with ID starting with 9d0d9f51c76ccd54ba82e664251e3e49b4ddfabf533b35974dad4d2644267ba3 not found: ID does not exist" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.023000 4857 scope.go:117] "RemoveContainer" containerID="9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad" Mar 18 15:01:57 crc kubenswrapper[4857]: E0318 
15:01:57.023618 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad\": container with ID starting with 9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad not found: ID does not exist" containerID="9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.023667 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad"} err="failed to get container status \"9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad\": rpc error: code = NotFound desc = could not find container \"9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad\": container with ID starting with 9436558621d1a3b545ad509a5883c0b76d76bb9d2c461da615e0136589ac9dad not found: ID does not exist" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.044973 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.045081 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:01:57 crc kubenswrapper[4857]: I0318 15:01:57.203496 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" 
path="/var/lib/kubelet/pods/175a6ee7-5fbb-4920-8c8f-47e35ba83d61/volumes" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.168385 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564102-88xh6"] Mar 18 15:02:00 crc kubenswrapper[4857]: E0318 15:02:00.169529 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="extract-utilities" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.169550 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="extract-utilities" Mar 18 15:02:00 crc kubenswrapper[4857]: E0318 15:02:00.169557 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="extract-content" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.169564 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="extract-content" Mar 18 15:02:00 crc kubenswrapper[4857]: E0318 15:02:00.169634 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="registry-server" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.169640 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="registry-server" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.169938 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="175a6ee7-5fbb-4920-8c8f-47e35ba83d61" containerName="registry-server" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.170943 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.174489 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.174494 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.174993 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.181307 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564102-88xh6"] Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.362897 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w94kp\" (UniqueName: \"kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp\") pod \"auto-csr-approver-29564102-88xh6\" (UID: \"302de913-9b18-4b76-b971-8f7b3f76430e\") " pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.465336 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w94kp\" (UniqueName: \"kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp\") pod \"auto-csr-approver-29564102-88xh6\" (UID: \"302de913-9b18-4b76-b971-8f7b3f76430e\") " pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.508775 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w94kp\" (UniqueName: \"kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp\") pod \"auto-csr-approver-29564102-88xh6\" (UID: \"302de913-9b18-4b76-b971-8f7b3f76430e\") " 
pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:00 crc kubenswrapper[4857]: I0318 15:02:00.798678 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:01 crc kubenswrapper[4857]: I0318 15:02:01.358192 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564102-88xh6"] Mar 18 15:02:02 crc kubenswrapper[4857]: I0318 15:02:02.079003 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564102-88xh6" event={"ID":"302de913-9b18-4b76-b971-8f7b3f76430e","Type":"ContainerStarted","Data":"a592fc9aa707401d54ac319ccacacc7daf8c5f7b4b3e7859bfaf3a29aad136e0"} Mar 18 15:02:10 crc kubenswrapper[4857]: I0318 15:02:10.739352 4857 generic.go:334] "Generic (PLEG): container finished" podID="302de913-9b18-4b76-b971-8f7b3f76430e" containerID="43225d60309af4b214168340db6203039b78babfb0a5c898d0224f70679dd9f3" exitCode=0 Mar 18 15:02:10 crc kubenswrapper[4857]: I0318 15:02:10.739449 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564102-88xh6" event={"ID":"302de913-9b18-4b76-b971-8f7b3f76430e","Type":"ContainerDied","Data":"43225d60309af4b214168340db6203039b78babfb0a5c898d0224f70679dd9f3"} Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.257548 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.324032 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94kp\" (UniqueName: \"kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp\") pod \"302de913-9b18-4b76-b971-8f7b3f76430e\" (UID: \"302de913-9b18-4b76-b971-8f7b3f76430e\") " Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.334580 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp" (OuterVolumeSpecName: "kube-api-access-w94kp") pod "302de913-9b18-4b76-b971-8f7b3f76430e" (UID: "302de913-9b18-4b76-b971-8f7b3f76430e"). InnerVolumeSpecName "kube-api-access-w94kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.428344 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w94kp\" (UniqueName: \"kubernetes.io/projected/302de913-9b18-4b76-b971-8f7b3f76430e-kube-api-access-w94kp\") on node \"crc\" DevicePath \"\"" Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.805174 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564102-88xh6" event={"ID":"302de913-9b18-4b76-b971-8f7b3f76430e","Type":"ContainerDied","Data":"a592fc9aa707401d54ac319ccacacc7daf8c5f7b4b3e7859bfaf3a29aad136e0"} Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.805229 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a592fc9aa707401d54ac319ccacacc7daf8c5f7b4b3e7859bfaf3a29aad136e0" Mar 18 15:02:12 crc kubenswrapper[4857]: I0318 15:02:12.805241 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564102-88xh6" Mar 18 15:02:13 crc kubenswrapper[4857]: I0318 15:02:13.390905 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564096-srzqb"] Mar 18 15:02:13 crc kubenswrapper[4857]: I0318 15:02:13.402168 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564096-srzqb"] Mar 18 15:02:15 crc kubenswrapper[4857]: I0318 15:02:15.183466 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2d6530-cf73-400d-a128-6e22c36e1098" path="/var/lib/kubelet/pods/ae2d6530-cf73-400d-a128-6e22c36e1098/volumes" Mar 18 15:02:16 crc kubenswrapper[4857]: I0318 15:02:16.994985 4857 scope.go:117] "RemoveContainer" containerID="c1726e8879537751e25b8d85bb24876c6b46609755de5b8808bb7dee9af4c4cb" Mar 18 15:02:27 crc kubenswrapper[4857]: I0318 15:02:27.038478 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:02:27 crc kubenswrapper[4857]: I0318 15:02:27.039181 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:02:27 crc kubenswrapper[4857]: I0318 15:02:27.039253 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 15:02:27 crc kubenswrapper[4857]: I0318 15:02:27.040804 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 15:02:27 crc kubenswrapper[4857]: I0318 15:02:27.040896 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" gracePeriod=600 Mar 18 15:02:27 crc kubenswrapper[4857]: E0318 15:02:27.181887 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:02:28 crc kubenswrapper[4857]: I0318 15:02:28.110359 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" exitCode=0 Mar 18 15:02:28 crc kubenswrapper[4857]: I0318 15:02:28.110458 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"} Mar 18 15:02:28 crc kubenswrapper[4857]: I0318 15:02:28.110568 4857 scope.go:117] "RemoveContainer" containerID="e78d2fb36e6784c2b8ac5d24ffb571588f545f5e98998f4c07cea193d7332a71" Mar 18 15:02:28 crc kubenswrapper[4857]: I0318 15:02:28.112070 4857 
scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:02:28 crc kubenswrapper[4857]: E0318 15:02:28.112692 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:02:42 crc kubenswrapper[4857]: I0318 15:02:42.255046 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:02:42 crc kubenswrapper[4857]: E0318 15:02:42.255861 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:02:53 crc kubenswrapper[4857]: I0318 15:02:53.163785 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:02:53 crc kubenswrapper[4857]: E0318 15:02:53.164828 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:03:08 crc kubenswrapper[4857]: I0318 
15:03:08.164450 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:03:08 crc kubenswrapper[4857]: E0318 15:03:08.165552 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:03:20 crc kubenswrapper[4857]: I0318 15:03:20.785023 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:03:23 crc kubenswrapper[4857]: I0318 15:03:23.168148 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:03:23 crc kubenswrapper[4857]: E0318 15:03:23.170134 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:03:36 crc kubenswrapper[4857]: I0318 15:03:36.164340 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:03:36 crc kubenswrapper[4857]: E0318 15:03:36.166359 4857 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:03:50 crc kubenswrapper[4857]: I0318 15:03:50.163703 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:03:50 crc kubenswrapper[4857]: E0318 15:03:50.164976 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.159784 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564104-6kqb4"] Mar 18 15:04:00 crc kubenswrapper[4857]: E0318 15:04:00.161120 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302de913-9b18-4b76-b971-8f7b3f76430e" containerName="oc" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.161143 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="302de913-9b18-4b76-b971-8f7b3f76430e" containerName="oc" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.161459 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="302de913-9b18-4b76-b971-8f7b3f76430e" containerName="oc" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.162823 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.166970 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.166996 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.167418 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.183199 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564104-6kqb4"] Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.572616 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v25s\" (UniqueName: \"kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s\") pod \"auto-csr-approver-29564104-6kqb4\" (UID: \"0d645174-0aac-4db9-9091-c80f50fb218b\") " pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.676700 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v25s\" (UniqueName: \"kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s\") pod \"auto-csr-approver-29564104-6kqb4\" (UID: \"0d645174-0aac-4db9-9091-c80f50fb218b\") " pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.700705 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v25s\" (UniqueName: \"kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s\") pod \"auto-csr-approver-29564104-6kqb4\" (UID: \"0d645174-0aac-4db9-9091-c80f50fb218b\") " 
pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:00 crc kubenswrapper[4857]: I0318 15:04:00.757356 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:01 crc kubenswrapper[4857]: W0318 15:04:01.302392 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d645174_0aac_4db9_9091_c80f50fb218b.slice/crio-6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75 WatchSource:0}: Error finding container 6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75: Status 404 returned error can't find the container with id 6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75 Mar 18 15:04:01 crc kubenswrapper[4857]: I0318 15:04:01.323408 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564104-6kqb4"] Mar 18 15:04:01 crc kubenswrapper[4857]: I0318 15:04:01.561178 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" event={"ID":"0d645174-0aac-4db9-9091-c80f50fb218b","Type":"ContainerStarted","Data":"6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75"} Mar 18 15:04:03 crc kubenswrapper[4857]: I0318 15:04:03.164198 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:04:03 crc kubenswrapper[4857]: E0318 15:04:03.165273 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:04:07 crc 
kubenswrapper[4857]: I0318 15:04:07.848262 4857 generic.go:334] "Generic (PLEG): container finished" podID="0d645174-0aac-4db9-9091-c80f50fb218b" containerID="6006edde95c6f34b175f10f2f5d6a48af6251888bb42df274eb3c7b860468d02" exitCode=0 Mar 18 15:04:07 crc kubenswrapper[4857]: I0318 15:04:07.848379 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" event={"ID":"0d645174-0aac-4db9-9091-c80f50fb218b","Type":"ContainerDied","Data":"6006edde95c6f34b175f10f2f5d6a48af6251888bb42df274eb3c7b860468d02"} Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.725422 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.778640 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v25s\" (UniqueName: \"kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s\") pod \"0d645174-0aac-4db9-9091-c80f50fb218b\" (UID: \"0d645174-0aac-4db9-9091-c80f50fb218b\") " Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.785685 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s" (OuterVolumeSpecName: "kube-api-access-4v25s") pod "0d645174-0aac-4db9-9091-c80f50fb218b" (UID: "0d645174-0aac-4db9-9091-c80f50fb218b"). InnerVolumeSpecName "kube-api-access-4v25s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.878141 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" event={"ID":"0d645174-0aac-4db9-9091-c80f50fb218b","Type":"ContainerDied","Data":"6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75"} Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.878216 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6693983ef0452a3a9f22133ad5d9e30871f1c143caa841c9eaa9653e40aa75" Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.878253 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564104-6kqb4" Mar 18 15:04:09 crc kubenswrapper[4857]: I0318 15:04:09.883029 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v25s\" (UniqueName: \"kubernetes.io/projected/0d645174-0aac-4db9-9091-c80f50fb218b-kube-api-access-4v25s\") on node \"crc\" DevicePath \"\"" Mar 18 15:04:10 crc kubenswrapper[4857]: I0318 15:04:10.841509 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564098-72plm"] Mar 18 15:04:10 crc kubenswrapper[4857]: I0318 15:04:10.866278 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564098-72plm"] Mar 18 15:04:11 crc kubenswrapper[4857]: I0318 15:04:11.177422 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9242b541-2ecc-48ba-b447-00fee2a3b85c" path="/var/lib/kubelet/pods/9242b541-2ecc-48ba-b447-00fee2a3b85c/volumes" Mar 18 15:04:18 crc kubenswrapper[4857]: I0318 15:04:18.165314 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:04:18 crc kubenswrapper[4857]: E0318 15:04:18.166532 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:04:31 crc kubenswrapper[4857]: I0318 15:04:31.163918 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:04:31 crc kubenswrapper[4857]: E0318 15:04:31.164792 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:04:45 crc kubenswrapper[4857]: I0318 15:04:45.166260 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:04:45 crc kubenswrapper[4857]: E0318 15:04:45.167626 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:04:59 crc kubenswrapper[4857]: I0318 15:04:59.164856 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:04:59 crc kubenswrapper[4857]: E0318 15:04:59.165596 4857 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:05:10 crc kubenswrapper[4857]: I0318 15:05:10.828030 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:05:10 crc kubenswrapper[4857]: I0318 15:05:10.828058 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:05:14 crc kubenswrapper[4857]: I0318 15:05:14.164372 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:05:14 crc kubenswrapper[4857]: E0318 15:05:14.165309 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:05:17 crc kubenswrapper[4857]: I0318 15:05:17.202209 4857 scope.go:117] 
"RemoveContainer" containerID="568d7d1f73077dd93d51e4c7e68de24b7fbf1b8e970831b1b76c61d308581b5d" Mar 18 15:05:25 crc kubenswrapper[4857]: I0318 15:05:25.164127 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:05:25 crc kubenswrapper[4857]: E0318 15:05:25.165073 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:05:36 crc kubenswrapper[4857]: I0318 15:05:36.165275 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:05:36 crc kubenswrapper[4857]: E0318 15:05:36.166161 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:05:47 crc kubenswrapper[4857]: I0318 15:05:47.181807 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:05:47 crc kubenswrapper[4857]: E0318 15:05:47.182987 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.324007 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.324170 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564106-4pddx"] Mar 18 15:06:00 crc kubenswrapper[4857]: E0318 15:06:00.324949 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:06:00 crc kubenswrapper[4857]: E0318 15:06:00.325293 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d645174-0aac-4db9-9091-c80f50fb218b" containerName="oc" Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.325312 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d645174-0aac-4db9-9091-c80f50fb218b" containerName="oc" Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.325934 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d645174-0aac-4db9-9091-c80f50fb218b" containerName="oc" Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.327012 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.329275 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.329945 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.330006 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.341158 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564106-4pddx"]
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.420127 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qp68\" (UniqueName: \"kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68\") pod \"auto-csr-approver-29564106-4pddx\" (UID: \"5208021e-1875-4d69-aeb4-912c0aed0c21\") " pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.523382 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qp68\" (UniqueName: \"kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68\") pod \"auto-csr-approver-29564106-4pddx\" (UID: \"5208021e-1875-4d69-aeb4-912c0aed0c21\") " pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.556802 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qp68\" (UniqueName: \"kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68\") pod \"auto-csr-approver-29564106-4pddx\" (UID: \"5208021e-1875-4d69-aeb4-912c0aed0c21\") " pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:00 crc kubenswrapper[4857]: I0318 15:06:00.659333 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:01 crc kubenswrapper[4857]: I0318 15:06:01.312172 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564106-4pddx"]
Mar 18 15:06:01 crc kubenswrapper[4857]: W0318 15:06:01.317170 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5208021e_1875_4d69_aeb4_912c0aed0c21.slice/crio-f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1 WatchSource:0}: Error finding container f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1: Status 404 returned error can't find the container with id f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1
Mar 18 15:06:02 crc kubenswrapper[4857]: I0318 15:06:02.178334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564106-4pddx" event={"ID":"5208021e-1875-4d69-aeb4-912c0aed0c21","Type":"ContainerStarted","Data":"f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1"}
Mar 18 15:06:04 crc kubenswrapper[4857]: I0318 15:06:04.227087 4857 generic.go:334] "Generic (PLEG): container finished" podID="5208021e-1875-4d69-aeb4-912c0aed0c21" containerID="19a3c0b00067291a95a4baea6cb8b34e7ec969430339fcec38ac236ca3567fe8" exitCode=0
Mar 18 15:06:04 crc kubenswrapper[4857]: I0318 15:06:04.227507 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564106-4pddx" event={"ID":"5208021e-1875-4d69-aeb4-912c0aed0c21","Type":"ContainerDied","Data":"19a3c0b00067291a95a4baea6cb8b34e7ec969430339fcec38ac236ca3567fe8"}
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.038393 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.113644 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qp68\" (UniqueName: \"kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68\") pod \"5208021e-1875-4d69-aeb4-912c0aed0c21\" (UID: \"5208021e-1875-4d69-aeb4-912c0aed0c21\") "
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.122970 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68" (OuterVolumeSpecName: "kube-api-access-2qp68") pod "5208021e-1875-4d69-aeb4-912c0aed0c21" (UID: "5208021e-1875-4d69-aeb4-912c0aed0c21"). InnerVolumeSpecName "kube-api-access-2qp68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.218662 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qp68\" (UniqueName: \"kubernetes.io/projected/5208021e-1875-4d69-aeb4-912c0aed0c21-kube-api-access-2qp68\") on node \"crc\" DevicePath \"\""
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.254083 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564106-4pddx" event={"ID":"5208021e-1875-4d69-aeb4-912c0aed0c21","Type":"ContainerDied","Data":"f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1"}
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.254143 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b0e504dccce959867549ba40d28782dda4e276dc161197a27e2c6e0614e3b1"
Mar 18 15:06:06 crc kubenswrapper[4857]: I0318 15:06:06.254195 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564106-4pddx"
Mar 18 15:06:07 crc kubenswrapper[4857]: I0318 15:06:07.152156 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564100-s2j95"]
Mar 18 15:06:07 crc kubenswrapper[4857]: I0318 15:06:07.203849 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564100-s2j95"]
Mar 18 15:06:09 crc kubenswrapper[4857]: I0318 15:06:09.190939 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b85b3afd-44d5-4afa-96af-77b9e7f9d2c5" path="/var/lib/kubelet/pods/b85b3afd-44d5-4afa-96af-77b9e7f9d2c5/volumes"
Mar 18 15:06:12 crc kubenswrapper[4857]: I0318 15:06:12.164405 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:06:12 crc kubenswrapper[4857]: E0318 15:06:12.165502 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:06:17 crc kubenswrapper[4857]: I0318 15:06:17.316726 4857 scope.go:117] "RemoveContainer" containerID="6466c4f3f84ddd06bdf67d98706e7b1be53b4a95dbd2e1becab89cbea40825ca"
Mar 18 15:06:24 crc kubenswrapper[4857]: I0318 15:06:24.164189 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:06:24 crc kubenswrapper[4857]: E0318 15:06:24.165320 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:06:37 crc kubenswrapper[4857]: I0318 15:06:37.171926 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:06:37 crc kubenswrapper[4857]: E0318 15:06:37.173083 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:06:52 crc kubenswrapper[4857]: I0318 15:06:52.439658 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:06:52 crc kubenswrapper[4857]: E0318 15:06:52.440562 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:07:03 crc kubenswrapper[4857]: I0318 15:07:03.164574 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:07:03 crc kubenswrapper[4857]: E0318 15:07:03.165418 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:07:17 crc kubenswrapper[4857]: I0318 15:07:17.177190 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:07:17 crc kubenswrapper[4857]: E0318 15:07:17.180287 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:07:32 crc kubenswrapper[4857]: I0318 15:07:32.164855 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec"
Mar 18 15:07:33 crc kubenswrapper[4857]: I0318 15:07:33.155876 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418"}
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.159470 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564108-pccc9"]
Mar 18 15:08:00 crc kubenswrapper[4857]: E0318 15:08:00.160936 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5208021e-1875-4d69-aeb4-912c0aed0c21" containerName="oc"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.160968 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5208021e-1875-4d69-aeb4-912c0aed0c21" containerName="oc"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.161412 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5208021e-1875-4d69-aeb4-912c0aed0c21" containerName="oc"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.163631 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.168476 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.171154 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.172200 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.173008 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564108-pccc9"]
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.254918 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7kn\" (UniqueName: \"kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn\") pod \"auto-csr-approver-29564108-pccc9\" (UID: \"74d9897e-2547-447b-a090-e3ed15060a48\") " pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.356972 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h7kn\" (UniqueName: \"kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn\") pod \"auto-csr-approver-29564108-pccc9\" (UID: \"74d9897e-2547-447b-a090-e3ed15060a48\") " pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.377989 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h7kn\" (UniqueName: \"kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn\") pod \"auto-csr-approver-29564108-pccc9\" (UID: \"74d9897e-2547-447b-a090-e3ed15060a48\") " pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:00 crc kubenswrapper[4857]: I0318 15:08:00.510324 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:01 crc kubenswrapper[4857]: I0318 15:08:01.073327 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564108-pccc9"]
Mar 18 15:08:01 crc kubenswrapper[4857]: I0318 15:08:01.089479 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 15:08:02 crc kubenswrapper[4857]: I0318 15:08:02.077773 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564108-pccc9" event={"ID":"74d9897e-2547-447b-a090-e3ed15060a48","Type":"ContainerStarted","Data":"457f662460c2f6044fbff443235f2900119af762c1beaf0ad0272b20d2c9f9e9"}
Mar 18 15:08:03 crc kubenswrapper[4857]: I0318 15:08:03.248388 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564108-pccc9" event={"ID":"74d9897e-2547-447b-a090-e3ed15060a48","Type":"ContainerStarted","Data":"5b93402874347bcef0321d0c5761e2908a5ebc42ebbe382e66b8164fbef3a12f"}
Mar 18 15:08:03 crc kubenswrapper[4857]: I0318 15:08:03.286195 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564108-pccc9" podStartSLOduration=1.904407252 podStartE2EDuration="3.286157339s" podCreationTimestamp="2026-03-18 15:08:00 +0000 UTC" firstStartedPulling="2026-03-18 15:08:01.089206927 +0000 UTC m=+4065.218335384" lastFinishedPulling="2026-03-18 15:08:02.470957014 +0000 UTC m=+4066.600085471" observedRunningTime="2026-03-18 15:08:03.260138782 +0000 UTC m=+4067.389267239" watchObservedRunningTime="2026-03-18 15:08:03.286157339 +0000 UTC m=+4067.415285796"
Mar 18 15:08:04 crc kubenswrapper[4857]: I0318 15:08:04.311152 4857 generic.go:334] "Generic (PLEG): container finished" podID="74d9897e-2547-447b-a090-e3ed15060a48" containerID="5b93402874347bcef0321d0c5761e2908a5ebc42ebbe382e66b8164fbef3a12f" exitCode=0
Mar 18 15:08:04 crc kubenswrapper[4857]: I0318 15:08:04.311188 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564108-pccc9" event={"ID":"74d9897e-2547-447b-a090-e3ed15060a48","Type":"ContainerDied","Data":"5b93402874347bcef0321d0c5761e2908a5ebc42ebbe382e66b8164fbef3a12f"}
Mar 18 15:08:05 crc kubenswrapper[4857]: I0318 15:08:05.883543 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.047250 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h7kn\" (UniqueName: \"kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn\") pod \"74d9897e-2547-447b-a090-e3ed15060a48\" (UID: \"74d9897e-2547-447b-a090-e3ed15060a48\") "
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.053264 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn" (OuterVolumeSpecName: "kube-api-access-6h7kn") pod "74d9897e-2547-447b-a090-e3ed15060a48" (UID: "74d9897e-2547-447b-a090-e3ed15060a48"). InnerVolumeSpecName "kube-api-access-6h7kn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.152139 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h7kn\" (UniqueName: \"kubernetes.io/projected/74d9897e-2547-447b-a090-e3ed15060a48-kube-api-access-6h7kn\") on node \"crc\" DevicePath \"\""
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.730986 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564108-pccc9" event={"ID":"74d9897e-2547-447b-a090-e3ed15060a48","Type":"ContainerDied","Data":"457f662460c2f6044fbff443235f2900119af762c1beaf0ad0272b20d2c9f9e9"}
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.731064 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457f662460c2f6044fbff443235f2900119af762c1beaf0ad0272b20d2c9f9e9"
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.731427 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564108-pccc9"
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.850905 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564102-88xh6"]
Mar 18 15:08:06 crc kubenswrapper[4857]: I0318 15:08:06.882595 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564102-88xh6"]
Mar 18 15:08:07 crc kubenswrapper[4857]: I0318 15:08:07.193683 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302de913-9b18-4b76-b971-8f7b3f76430e" path="/var/lib/kubelet/pods/302de913-9b18-4b76-b971-8f7b3f76430e/volumes"
Mar 18 15:08:17 crc kubenswrapper[4857]: I0318 15:08:17.473161 4857 scope.go:117] "RemoveContainer" containerID="43225d60309af4b214168340db6203039b78babfb0a5c898d0224f70679dd9f3"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.736202 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:21 crc kubenswrapper[4857]: E0318 15:09:21.737721 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d9897e-2547-447b-a090-e3ed15060a48" containerName="oc"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.737771 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d9897e-2547-447b-a090-e3ed15060a48" containerName="oc"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.738144 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d9897e-2547-447b-a090-e3ed15060a48" containerName="oc"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.740529 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.750541 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.857727 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.858242 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjdx\" (UniqueName: \"kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.858277 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.962828 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.962979 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjdx\" (UniqueName: \"kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.963005 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.963544 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.963723 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:21 crc kubenswrapper[4857]: I0318 15:09:21.989565 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjdx\" (UniqueName: \"kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx\") pod \"redhat-operators-lfl22\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") " pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:22 crc kubenswrapper[4857]: I0318 15:09:22.078730 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:22 crc kubenswrapper[4857]: I0318 15:09:22.615397 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:23 crc kubenswrapper[4857]: I0318 15:09:23.204246 4857 generic.go:334] "Generic (PLEG): container finished" podID="d849b7f1-5b09-4200-a20a-ab484175b674" containerID="92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445" exitCode=0
Mar 18 15:09:23 crc kubenswrapper[4857]: I0318 15:09:23.204313 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerDied","Data":"92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445"}
Mar 18 15:09:23 crc kubenswrapper[4857]: I0318 15:09:23.204543 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerStarted","Data":"e89f8126d4b5c67c5b2cd602fdffd8d05299bde8d9af56130ea157818e6f9371"}
Mar 18 15:09:25 crc kubenswrapper[4857]: I0318 15:09:25.233125 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerStarted","Data":"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"}
Mar 18 15:09:30 crc kubenswrapper[4857]: E0318 15:09:30.680263 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd849b7f1_5b09_4200_a20a_ab484175b674.slice/crio-d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd849b7f1_5b09_4200_a20a_ab484175b674.slice/crio-conmon-d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 15:09:31 crc kubenswrapper[4857]: I0318 15:09:31.323829 4857 generic.go:334] "Generic (PLEG): container finished" podID="d849b7f1-5b09-4200-a20a-ab484175b674" containerID="d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7" exitCode=0
Mar 18 15:09:31 crc kubenswrapper[4857]: I0318 15:09:31.323916 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerDied","Data":"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"}
Mar 18 15:09:33 crc kubenswrapper[4857]: I0318 15:09:33.357674 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerStarted","Data":"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"}
Mar 18 15:09:33 crc kubenswrapper[4857]: I0318 15:09:33.415105 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfl22" podStartSLOduration=2.8714991789999997 podStartE2EDuration="12.41505965s" podCreationTimestamp="2026-03-18 15:09:21 +0000 UTC" firstStartedPulling="2026-03-18 15:09:23.207122709 +0000 UTC m=+4147.336251176" lastFinishedPulling="2026-03-18 15:09:32.75068316 +0000 UTC m=+4156.879811647" observedRunningTime="2026-03-18 15:09:33.398905282 +0000 UTC m=+4157.528033749" watchObservedRunningTime="2026-03-18 15:09:33.41505965 +0000 UTC m=+4157.544188107"
Mar 18 15:09:42 crc kubenswrapper[4857]: I0318 15:09:42.079664 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:42 crc kubenswrapper[4857]: I0318 15:09:42.080237 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:43 crc kubenswrapper[4857]: I0318 15:09:43.146496 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lfl22" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="registry-server" probeResult="failure" output=<
Mar 18 15:09:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 15:09:43 crc kubenswrapper[4857]: >
Mar 18 15:09:52 crc kubenswrapper[4857]: I0318 15:09:52.156211 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:52 crc kubenswrapper[4857]: I0318 15:09:52.210498 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:52 crc kubenswrapper[4857]: I0318 15:09:52.943309 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:53 crc kubenswrapper[4857]: I0318 15:09:53.650365 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lfl22" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="registry-server" containerID="cri-o://45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9" gracePeriod=2
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.389352 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.528255 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkjdx\" (UniqueName: \"kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx\") pod \"d849b7f1-5b09-4200-a20a-ab484175b674\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") "
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.528573 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content\") pod \"d849b7f1-5b09-4200-a20a-ab484175b674\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") "
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.528608 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities\") pod \"d849b7f1-5b09-4200-a20a-ab484175b674\" (UID: \"d849b7f1-5b09-4200-a20a-ab484175b674\") "
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.530106 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities" (OuterVolumeSpecName: "utilities") pod "d849b7f1-5b09-4200-a20a-ab484175b674" (UID: "d849b7f1-5b09-4200-a20a-ab484175b674"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.538378 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx" (OuterVolumeSpecName: "kube-api-access-tkjdx") pod "d849b7f1-5b09-4200-a20a-ab484175b674" (UID: "d849b7f1-5b09-4200-a20a-ab484175b674"). InnerVolumeSpecName "kube-api-access-tkjdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.634199 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-utilities\") on node \"crc\" DevicePath \"\""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.634456 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkjdx\" (UniqueName: \"kubernetes.io/projected/d849b7f1-5b09-4200-a20a-ab484175b674-kube-api-access-tkjdx\") on node \"crc\" DevicePath \"\""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.666074 4857 generic.go:334] "Generic (PLEG): container finished" podID="d849b7f1-5b09-4200-a20a-ab484175b674" containerID="45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9" exitCode=0
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.666128 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerDied","Data":"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"}
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.666162 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfl22" event={"ID":"d849b7f1-5b09-4200-a20a-ab484175b674","Type":"ContainerDied","Data":"e89f8126d4b5c67c5b2cd602fdffd8d05299bde8d9af56130ea157818e6f9371"}
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.666182 4857 scope.go:117] "RemoveContainer" containerID="45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.666482 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfl22"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.689323 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d849b7f1-5b09-4200-a20a-ab484175b674" (UID: "d849b7f1-5b09-4200-a20a-ab484175b674"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.703147 4857 scope.go:117] "RemoveContainer" containerID="d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.731130 4857 scope.go:117] "RemoveContainer" containerID="92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.738769 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d849b7f1-5b09-4200-a20a-ab484175b674-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.787549 4857 scope.go:117] "RemoveContainer" containerID="45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"
Mar 18 15:09:54 crc kubenswrapper[4857]: E0318 15:09:54.788401 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9\": container with ID starting with 45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9 not found: ID does not exist" containerID="45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.788459 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9"} err="failed to get container status \"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9\": rpc error: code = NotFound desc = could not find container \"45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9\": container with ID starting with 45b73eaf5dc94deee32322cf55d91ef3cd9fe81beea6f91bba5a095022882af9 not found: ID does not exist"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.788485 4857 scope.go:117] "RemoveContainer" containerID="d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"
Mar 18 15:09:54 crc kubenswrapper[4857]: E0318 15:09:54.788820 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7\": container with ID starting with d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7 not found: ID does not exist" containerID="d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.788867 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7"} err="failed to get container status \"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7\": rpc error: code = NotFound desc = could not find container \"d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7\": container with ID starting with d56e0b164ea1628f9b275531a1c17a32b29308e5b26f2101112ef18382c406f7 not found: ID does not exist"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.788890 4857 scope.go:117] "RemoveContainer" containerID="92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445"
Mar 18 15:09:54 crc kubenswrapper[4857]: E0318 15:09:54.790366 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445\": container with ID starting with 92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445 not found: ID does not exist" containerID="92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445"
Mar 18 15:09:54 crc kubenswrapper[4857]: I0318 15:09:54.790397 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445"} err="failed to get container status \"92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445\": rpc error: code = NotFound desc = could not find container \"92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445\": container with ID starting with 92039efefa1037341918229cef5c9512acae9409309bfb3412567a56c9e41445 not found: ID does not exist"
Mar 18 15:09:55 crc kubenswrapper[4857]: I0318 15:09:55.025951 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:55 crc kubenswrapper[4857]: I0318 15:09:55.042991 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lfl22"]
Mar 18 15:09:55 crc kubenswrapper[4857]: I0318 15:09:55.182817 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" path="/var/lib/kubelet/pods/d849b7f1-5b09-4200-a20a-ab484175b674/volumes"
Mar 18 15:09:57 crc kubenswrapper[4857]: I0318 15:09:57.039304 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:09:57 crc kubenswrapper[4857]: I0318 15:09:57.039630 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.155850 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564110-5g4cl"]
Mar 18 15:10:00 crc kubenswrapper[4857]: E0318 15:10:00.157569 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="extract-content"
Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.157614 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="extract-content"
Mar 18 15:10:00 crc kubenswrapper[4857]: E0318 15:10:00.157733 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="extract-utilities"
Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.157781 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="extract-utilities"
Mar 18 15:10:00 crc kubenswrapper[4857]: E0318 15:10:00.157809 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="registry-server"
Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.157826 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="registry-server"
Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.158379 4857 memory_manager.go:354] "RemoveStaleState removing state"
podUID="d849b7f1-5b09-4200-a20a-ab484175b674" containerName="registry-server" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.160217 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.165315 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.165852 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.165992 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.170266 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564110-5g4cl"] Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.209865 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7tcr\" (UniqueName: \"kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr\") pod \"auto-csr-approver-29564110-5g4cl\" (UID: \"29cb6542-f295-478c-88c9-b41bfeb4b7a1\") " pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.313051 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7tcr\" (UniqueName: \"kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr\") pod \"auto-csr-approver-29564110-5g4cl\" (UID: \"29cb6542-f295-478c-88c9-b41bfeb4b7a1\") " pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.337650 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7tcr\" 
(UniqueName: \"kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr\") pod \"auto-csr-approver-29564110-5g4cl\" (UID: \"29cb6542-f295-478c-88c9-b41bfeb4b7a1\") " pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:00 crc kubenswrapper[4857]: I0318 15:10:00.493195 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:01 crc kubenswrapper[4857]: I0318 15:10:01.056832 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564110-5g4cl"] Mar 18 15:10:01 crc kubenswrapper[4857]: I0318 15:10:01.781898 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" event={"ID":"29cb6542-f295-478c-88c9-b41bfeb4b7a1","Type":"ContainerStarted","Data":"6cace7058ce070f97805c33ac0b00f1b753aec847dfe921ad6c24feb5d2bcfd9"} Mar 18 15:10:02 crc kubenswrapper[4857]: I0318 15:10:02.808616 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" event={"ID":"29cb6542-f295-478c-88c9-b41bfeb4b7a1","Type":"ContainerStarted","Data":"bbbbb34551698248c68f02fa67cd9fab06ced2fe0c7e701a0227697032f32417"} Mar 18 15:10:02 crc kubenswrapper[4857]: I0318 15:10:02.838110 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" podStartSLOduration=1.512920569 podStartE2EDuration="2.838081347s" podCreationTimestamp="2026-03-18 15:10:00 +0000 UTC" firstStartedPulling="2026-03-18 15:10:01.0686241 +0000 UTC m=+4185.197752557" lastFinishedPulling="2026-03-18 15:10:02.393784878 +0000 UTC m=+4186.522913335" observedRunningTime="2026-03-18 15:10:02.826806982 +0000 UTC m=+4186.955935439" watchObservedRunningTime="2026-03-18 15:10:02.838081347 +0000 UTC m=+4186.967209804" Mar 18 15:10:03 crc kubenswrapper[4857]: I0318 15:10:03.825859 4857 generic.go:334] "Generic (PLEG): 
container finished" podID="29cb6542-f295-478c-88c9-b41bfeb4b7a1" containerID="bbbbb34551698248c68f02fa67cd9fab06ced2fe0c7e701a0227697032f32417" exitCode=0 Mar 18 15:10:03 crc kubenswrapper[4857]: I0318 15:10:03.826126 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" event={"ID":"29cb6542-f295-478c-88c9-b41bfeb4b7a1","Type":"ContainerDied","Data":"bbbbb34551698248c68f02fa67cd9fab06ced2fe0c7e701a0227697032f32417"} Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.302664 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.383634 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7tcr\" (UniqueName: \"kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr\") pod \"29cb6542-f295-478c-88c9-b41bfeb4b7a1\" (UID: \"29cb6542-f295-478c-88c9-b41bfeb4b7a1\") " Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.392702 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr" (OuterVolumeSpecName: "kube-api-access-d7tcr") pod "29cb6542-f295-478c-88c9-b41bfeb4b7a1" (UID: "29cb6542-f295-478c-88c9-b41bfeb4b7a1"). InnerVolumeSpecName "kube-api-access-d7tcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.487219 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7tcr\" (UniqueName: \"kubernetes.io/projected/29cb6542-f295-478c-88c9-b41bfeb4b7a1-kube-api-access-d7tcr\") on node \"crc\" DevicePath \"\"" Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.857737 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" event={"ID":"29cb6542-f295-478c-88c9-b41bfeb4b7a1","Type":"ContainerDied","Data":"6cace7058ce070f97805c33ac0b00f1b753aec847dfe921ad6c24feb5d2bcfd9"} Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.857825 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cace7058ce070f97805c33ac0b00f1b753aec847dfe921ad6c24feb5d2bcfd9" Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.857921 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564110-5g4cl" Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.940113 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564104-6kqb4"] Mar 18 15:10:05 crc kubenswrapper[4857]: I0318 15:10:05.952884 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564104-6kqb4"] Mar 18 15:10:07 crc kubenswrapper[4857]: I0318 15:10:07.197686 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d645174-0aac-4db9-9091-c80f50fb218b" path="/var/lib/kubelet/pods/0d645174-0aac-4db9-9091-c80f50fb218b/volumes" Mar 18 15:10:17 crc kubenswrapper[4857]: I0318 15:10:17.605996 4857 scope.go:117] "RemoveContainer" containerID="6006edde95c6f34b175f10f2f5d6a48af6251888bb42df274eb3c7b860468d02" Mar 18 15:10:27 crc kubenswrapper[4857]: I0318 15:10:27.039241 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:10:27 crc kubenswrapper[4857]: I0318 15:10:27.039797 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.038964 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.039415 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.039473 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.040543 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.040604 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418" gracePeriod=600 Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.940697 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418" exitCode=0 Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.940798 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418"} Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.941359 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675"} Mar 18 15:10:57 crc kubenswrapper[4857]: I0318 15:10:57.941398 4857 scope.go:117] "RemoveContainer" containerID="83a23f1827bd262faaa2314393e4a9bdb42684150e40e5145b242e10d6931cec" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.642200 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:11:39 crc kubenswrapper[4857]: E0318 15:11:39.643502 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29cb6542-f295-478c-88c9-b41bfeb4b7a1" containerName="oc" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.643519 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="29cb6542-f295-478c-88c9-b41bfeb4b7a1" containerName="oc" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.643927 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="29cb6542-f295-478c-88c9-b41bfeb4b7a1" containerName="oc" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.646054 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.646153 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.782580 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.783073 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.783148 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4nx\" (UniqueName: \"kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.886304 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.886426 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.886492 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4nx\" (UniqueName: \"kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.887224 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.887340 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.909819 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4nx\" (UniqueName: 
\"kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx\") pod \"redhat-marketplace-5dcpr\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:39 crc kubenswrapper[4857]: I0318 15:11:39.989619 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:40 crc kubenswrapper[4857]: I0318 15:11:40.582400 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:11:41 crc kubenswrapper[4857]: I0318 15:11:41.380543 4857 generic.go:334] "Generic (PLEG): container finished" podID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerID="fa559c2d7e35b2b596ce484bc3b196202f7f590a1203a9be2d5663ae202d1724" exitCode=0 Mar 18 15:11:41 crc kubenswrapper[4857]: I0318 15:11:41.380862 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerDied","Data":"fa559c2d7e35b2b596ce484bc3b196202f7f590a1203a9be2d5663ae202d1724"} Mar 18 15:11:41 crc kubenswrapper[4857]: I0318 15:11:41.380906 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerStarted","Data":"ce6231730995f2c1e61f4ad5825877ee83847dd7fffbfc419580421abba38bf1"} Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.322606 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.326042 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.340464 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.476283 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.476585 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrxxl\" (UniqueName: \"kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:42 crc kubenswrapper[4857]: I0318 15:11:42.476964 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.003028 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.003207 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rrxxl\" (UniqueName: \"kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.003399 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.004809 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.010801 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.034868 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrxxl\" (UniqueName: \"kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl\") pod \"community-operators-5f4qr\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.263570 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.428277 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerStarted","Data":"abcf0e407765e87248da881a1cfa5e31bbdd4dac1618d98288a6d62f5b6cb882"} Mar 18 15:11:43 crc kubenswrapper[4857]: I0318 15:11:43.909912 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:11:44 crc kubenswrapper[4857]: I0318 15:11:44.444705 4857 generic.go:334] "Generic (PLEG): container finished" podID="2dab8745-583d-43f4-b903-c0d373d0139e" containerID="150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762" exitCode=0 Mar 18 15:11:44 crc kubenswrapper[4857]: I0318 15:11:44.444808 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerDied","Data":"150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762"} Mar 18 15:11:44 crc kubenswrapper[4857]: I0318 15:11:44.445186 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerStarted","Data":"f81d7a310253a8c9fe4d822701bd5f65820a616c211c98009545fd37974732b7"} Mar 18 15:11:44 crc kubenswrapper[4857]: I0318 15:11:44.449336 4857 generic.go:334] "Generic (PLEG): container finished" podID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerID="abcf0e407765e87248da881a1cfa5e31bbdd4dac1618d98288a6d62f5b6cb882" exitCode=0 Mar 18 15:11:44 crc kubenswrapper[4857]: I0318 15:11:44.449397 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" 
event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerDied","Data":"abcf0e407765e87248da881a1cfa5e31bbdd4dac1618d98288a6d62f5b6cb882"} Mar 18 15:11:45 crc kubenswrapper[4857]: I0318 15:11:45.462905 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerStarted","Data":"0c24f5ccf0ad79a00068d285791e01c01a19563adcbe1cef5432997bba774685"} Mar 18 15:11:45 crc kubenswrapper[4857]: I0318 15:11:45.524673 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5dcpr" podStartSLOduration=2.898269872 podStartE2EDuration="6.524653798s" podCreationTimestamp="2026-03-18 15:11:39 +0000 UTC" firstStartedPulling="2026-03-18 15:11:41.38723959 +0000 UTC m=+4285.516368047" lastFinishedPulling="2026-03-18 15:11:45.013623516 +0000 UTC m=+4289.142751973" observedRunningTime="2026-03-18 15:11:45.51878503 +0000 UTC m=+4289.647913487" watchObservedRunningTime="2026-03-18 15:11:45.524653798 +0000 UTC m=+4289.653782255" Mar 18 15:11:46 crc kubenswrapper[4857]: I0318 15:11:46.477952 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerStarted","Data":"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32"} Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.377704 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.383252 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.396026 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.468274 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.468648 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8lk\" (UniqueName: \"kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.469362 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.918534 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.918763 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ds8lk\" (UniqueName: \"kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.919011 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.919565 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.920310 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:47 crc kubenswrapper[4857]: I0318 15:11:47.977980 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8lk\" (UniqueName: \"kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk\") pod \"certified-operators-7wxkj\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:48 crc kubenswrapper[4857]: I0318 15:11:48.026650 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:48 crc kubenswrapper[4857]: I0318 15:11:48.690069 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:11:49 crc kubenswrapper[4857]: I0318 15:11:49.594388 4857 generic.go:334] "Generic (PLEG): container finished" podID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerID="adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be" exitCode=0 Mar 18 15:11:49 crc kubenswrapper[4857]: I0318 15:11:49.597036 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerDied","Data":"adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be"} Mar 18 15:11:49 crc kubenswrapper[4857]: I0318 15:11:49.597191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerStarted","Data":"51b3ba55544e22c87b1a73478077e9990010804ee2e7928081a1ec438ed48a36"} Mar 18 15:11:49 crc kubenswrapper[4857]: I0318 15:11:49.990768 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:49 crc kubenswrapper[4857]: I0318 15:11:49.991403 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:11:50 crc kubenswrapper[4857]: I0318 15:11:50.886635 4857 generic.go:334] "Generic (PLEG): container finished" podID="2dab8745-583d-43f4-b903-c0d373d0139e" containerID="1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32" exitCode=0 Mar 18 15:11:50 crc kubenswrapper[4857]: I0318 15:11:50.888299 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" 
event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerDied","Data":"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32"} Mar 18 15:11:51 crc kubenswrapper[4857]: I0318 15:11:51.049226 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5dcpr" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" probeResult="failure" output=< Mar 18 15:11:51 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:11:51 crc kubenswrapper[4857]: > Mar 18 15:11:51 crc kubenswrapper[4857]: I0318 15:11:51.902139 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerStarted","Data":"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d"} Mar 18 15:11:51 crc kubenswrapper[4857]: I0318 15:11:51.904460 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerStarted","Data":"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5"} Mar 18 15:11:51 crc kubenswrapper[4857]: I0318 15:11:51.931651 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5f4qr" podStartSLOduration=2.997433988 podStartE2EDuration="9.931630963s" podCreationTimestamp="2026-03-18 15:11:42 +0000 UTC" firstStartedPulling="2026-03-18 15:11:44.447137916 +0000 UTC m=+4288.576266383" lastFinishedPulling="2026-03-18 15:11:51.381334851 +0000 UTC m=+4295.510463358" observedRunningTime="2026-03-18 15:11:51.926407861 +0000 UTC m=+4296.055536338" watchObservedRunningTime="2026-03-18 15:11:51.931630963 +0000 UTC m=+4296.060759420" Mar 18 15:11:53 crc kubenswrapper[4857]: I0318 15:11:53.264843 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:53 crc kubenswrapper[4857]: I0318 15:11:53.264938 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:11:54 crc kubenswrapper[4857]: I0318 15:11:54.360709 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5f4qr" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" probeResult="failure" output=< Mar 18 15:11:54 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:11:54 crc kubenswrapper[4857]: > Mar 18 15:11:54 crc kubenswrapper[4857]: I0318 15:11:54.940647 4857 generic.go:334] "Generic (PLEG): container finished" podID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerID="4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5" exitCode=0 Mar 18 15:11:54 crc kubenswrapper[4857]: I0318 15:11:54.940777 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerDied","Data":"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5"} Mar 18 15:11:55 crc kubenswrapper[4857]: I0318 15:11:55.963425 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerStarted","Data":"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18"} Mar 18 15:11:55 crc kubenswrapper[4857]: I0318 15:11:55.996006 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wxkj" podStartSLOduration=3.123429888 podStartE2EDuration="8.995969552s" podCreationTimestamp="2026-03-18 15:11:47 +0000 UTC" firstStartedPulling="2026-03-18 15:11:49.600394443 +0000 UTC m=+4293.729522900" lastFinishedPulling="2026-03-18 
15:11:55.472934107 +0000 UTC m=+4299.602062564" observedRunningTime="2026-03-18 15:11:55.988294128 +0000 UTC m=+4300.117422595" watchObservedRunningTime="2026-03-18 15:11:55.995969552 +0000 UTC m=+4300.125098009" Mar 18 15:11:58 crc kubenswrapper[4857]: I0318 15:11:58.027394 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:58 crc kubenswrapper[4857]: I0318 15:11:58.027475 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:11:58 crc kubenswrapper[4857]: I0318 15:11:58.094527 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.168239 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564112-8wvnz"] Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.170894 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.174087 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.174223 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.175102 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.193306 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564112-8wvnz"] Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.354881 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlz9q\" (UniqueName: \"kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q\") pod \"auto-csr-approver-29564112-8wvnz\" (UID: \"f01b94f4-418c-47fb-932f-d11a4e443579\") " pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.458071 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlz9q\" (UniqueName: \"kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q\") pod \"auto-csr-approver-29564112-8wvnz\" (UID: \"f01b94f4-418c-47fb-932f-d11a4e443579\") " pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.482663 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlz9q\" (UniqueName: \"kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q\") pod \"auto-csr-approver-29564112-8wvnz\" (UID: \"f01b94f4-418c-47fb-932f-d11a4e443579\") " 
pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:00 crc kubenswrapper[4857]: I0318 15:12:00.497371 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:01 crc kubenswrapper[4857]: I0318 15:12:01.029022 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564112-8wvnz"] Mar 18 15:12:01 crc kubenswrapper[4857]: I0318 15:12:01.043871 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5dcpr" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" probeResult="failure" output=< Mar 18 15:12:01 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:12:01 crc kubenswrapper[4857]: > Mar 18 15:12:02 crc kubenswrapper[4857]: I0318 15:12:02.487262 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" event={"ID":"f01b94f4-418c-47fb-932f-d11a4e443579","Type":"ContainerStarted","Data":"f77072ecbdcd5f1031dab03704383d33d03daac0cac817ffe1b3631d8573961c"} Mar 18 15:12:05 crc kubenswrapper[4857]: I0318 15:12:05.084106 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5f4qr" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" probeResult="failure" output=< Mar 18 15:12:05 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:12:05 crc kubenswrapper[4857]: > Mar 18 15:12:05 crc kubenswrapper[4857]: I0318 15:12:05.675356 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" event={"ID":"f01b94f4-418c-47fb-932f-d11a4e443579","Type":"ContainerStarted","Data":"e136282cdf24037c1397779f912e2aa4ccd933090c5331b9dbfb991048f3667a"} Mar 18 15:12:05 crc kubenswrapper[4857]: I0318 15:12:05.695602 4857 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" podStartSLOduration=4.359388651 podStartE2EDuration="5.695583269s" podCreationTimestamp="2026-03-18 15:12:00 +0000 UTC" firstStartedPulling="2026-03-18 15:12:01.898999254 +0000 UTC m=+4306.028127711" lastFinishedPulling="2026-03-18 15:12:03.235193872 +0000 UTC m=+4307.364322329" observedRunningTime="2026-03-18 15:12:05.694337167 +0000 UTC m=+4309.823465624" watchObservedRunningTime="2026-03-18 15:12:05.695583269 +0000 UTC m=+4309.824711726" Mar 18 15:12:06 crc kubenswrapper[4857]: I0318 15:12:06.880809 4857 generic.go:334] "Generic (PLEG): container finished" podID="f01b94f4-418c-47fb-932f-d11a4e443579" containerID="e136282cdf24037c1397779f912e2aa4ccd933090c5331b9dbfb991048f3667a" exitCode=0 Mar 18 15:12:06 crc kubenswrapper[4857]: I0318 15:12:06.881111 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" event={"ID":"f01b94f4-418c-47fb-932f-d11a4e443579","Type":"ContainerDied","Data":"e136282cdf24037c1397779f912e2aa4ccd933090c5331b9dbfb991048f3667a"} Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.088645 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.548813 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.604579 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.675222 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlz9q\" (UniqueName: \"kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q\") pod \"f01b94f4-418c-47fb-932f-d11a4e443579\" (UID: \"f01b94f4-418c-47fb-932f-d11a4e443579\") " Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.682313 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q" (OuterVolumeSpecName: "kube-api-access-vlz9q") pod "f01b94f4-418c-47fb-932f-d11a4e443579" (UID: "f01b94f4-418c-47fb-932f-d11a4e443579"). InnerVolumeSpecName "kube-api-access-vlz9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.777952 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlz9q\" (UniqueName: \"kubernetes.io/projected/f01b94f4-418c-47fb-932f-d11a4e443579-kube-api-access-vlz9q\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.930942 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" event={"ID":"f01b94f4-418c-47fb-932f-d11a4e443579","Type":"ContainerDied","Data":"f77072ecbdcd5f1031dab03704383d33d03daac0cac817ffe1b3631d8573961c"} Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.931006 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77072ecbdcd5f1031dab03704383d33d03daac0cac817ffe1b3631d8573961c" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.931150 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7wxkj" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" 
containerName="registry-server" containerID="cri-o://4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18" gracePeriod=2 Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.931406 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564112-8wvnz" Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.975290 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564106-4pddx"] Mar 18 15:12:08 crc kubenswrapper[4857]: I0318 15:12:08.984848 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564106-4pddx"] Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.191108 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5208021e-1875-4d69-aeb4-912c0aed0c21" path="/var/lib/kubelet/pods/5208021e-1875-4d69-aeb4-912c0aed0c21/volumes" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.506219 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.605278 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8lk\" (UniqueName: \"kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk\") pod \"8c63b30d-e65e-493f-9632-9cfeddc620dd\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.605732 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities\") pod \"8c63b30d-e65e-493f-9632-9cfeddc620dd\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.606000 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content\") pod \"8c63b30d-e65e-493f-9632-9cfeddc620dd\" (UID: \"8c63b30d-e65e-493f-9632-9cfeddc620dd\") " Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.606496 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities" (OuterVolumeSpecName: "utilities") pod "8c63b30d-e65e-493f-9632-9cfeddc620dd" (UID: "8c63b30d-e65e-493f-9632-9cfeddc620dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.606916 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.609921 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk" (OuterVolumeSpecName: "kube-api-access-ds8lk") pod "8c63b30d-e65e-493f-9632-9cfeddc620dd" (UID: "8c63b30d-e65e-493f-9632-9cfeddc620dd"). InnerVolumeSpecName "kube-api-access-ds8lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.660169 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c63b30d-e65e-493f-9632-9cfeddc620dd" (UID: "8c63b30d-e65e-493f-9632-9cfeddc620dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.708712 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c63b30d-e65e-493f-9632-9cfeddc620dd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:09 crc kubenswrapper[4857]: I0318 15:12:09.708761 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8lk\" (UniqueName: \"kubernetes.io/projected/8c63b30d-e65e-493f-9632-9cfeddc620dd-kube-api-access-ds8lk\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.210467 4857 generic.go:334] "Generic (PLEG): container finished" podID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerID="4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18" exitCode=0 Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.210744 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerDied","Data":"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18"} Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.210795 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxkj" event={"ID":"8c63b30d-e65e-493f-9632-9cfeddc620dd","Type":"ContainerDied","Data":"51b3ba55544e22c87b1a73478077e9990010804ee2e7928081a1ec438ed48a36"} Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.210814 4857 scope.go:117] "RemoveContainer" containerID="4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.212005 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wxkj" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.305698 4857 scope.go:117] "RemoveContainer" containerID="4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.313840 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.325998 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7wxkj"] Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.330221 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.389102 4857 scope.go:117] "RemoveContainer" containerID="adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.460647 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.474931 4857 scope.go:117] "RemoveContainer" containerID="4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18" Mar 18 15:12:10 crc kubenswrapper[4857]: E0318 15:12:10.483991 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18\": container with ID starting with 4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18 not found: ID does not exist" containerID="4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.484059 4857 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18"} err="failed to get container status \"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18\": rpc error: code = NotFound desc = could not find container \"4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18\": container with ID starting with 4dfa509c85fdce7652e5c3e3d36ec67376c4cbcf28b805e2c8da067da4b39c18 not found: ID does not exist" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.484097 4857 scope.go:117] "RemoveContainer" containerID="4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5" Mar 18 15:12:10 crc kubenswrapper[4857]: E0318 15:12:10.489689 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5\": container with ID starting with 4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5 not found: ID does not exist" containerID="4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.489734 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5"} err="failed to get container status \"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5\": rpc error: code = NotFound desc = could not find container \"4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5\": container with ID starting with 4da232d62343b5656a23d3d71ff39ff867aa65d4b2e9775f55a78a1fbf37d1e5 not found: ID does not exist" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.489781 4857 scope.go:117] "RemoveContainer" containerID="adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be" Mar 18 15:12:10 crc kubenswrapper[4857]: E0318 15:12:10.490514 4857 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be\": container with ID starting with adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be not found: ID does not exist" containerID="adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be" Mar 18 15:12:10 crc kubenswrapper[4857]: I0318 15:12:10.490543 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be"} err="failed to get container status \"adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be\": rpc error: code = NotFound desc = could not find container \"adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be\": container with ID starting with adb7b57ecaa7123e14f3ee23c06fb2a4ac5e4e97b4006fe257846b23db49a8be not found: ID does not exist" Mar 18 15:12:11 crc kubenswrapper[4857]: I0318 15:12:11.241376 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" path="/var/lib/kubelet/pods/8c63b30d-e65e-493f-9632-9cfeddc620dd/volumes" Mar 18 15:12:12 crc kubenswrapper[4857]: I0318 15:12:12.694505 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:12:12 crc kubenswrapper[4857]: I0318 15:12:12.696476 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5dcpr" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" containerID="cri-o://0c24f5ccf0ad79a00068d285791e01c01a19563adcbe1cef5432997bba774685" gracePeriod=2 Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.253320 4857 generic.go:334] "Generic (PLEG): container finished" podID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerID="0c24f5ccf0ad79a00068d285791e01c01a19563adcbe1cef5432997bba774685" exitCode=0 Mar 18 
15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.253381 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerDied","Data":"0c24f5ccf0ad79a00068d285791e01c01a19563adcbe1cef5432997bba774685"} Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.253672 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5dcpr" event={"ID":"c0377796-f4a7-420a-a2e5-1e7c93631234","Type":"ContainerDied","Data":"ce6231730995f2c1e61f4ad5825877ee83847dd7fffbfc419580421abba38bf1"} Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.253688 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6231730995f2c1e61f4ad5825877ee83847dd7fffbfc419580421abba38bf1" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.320142 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.339859 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.401521 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.418359 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities\") pod \"c0377796-f4a7-420a-a2e5-1e7c93631234\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.418870 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content\") pod \"c0377796-f4a7-420a-a2e5-1e7c93631234\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.418956 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4nx\" (UniqueName: \"kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx\") pod \"c0377796-f4a7-420a-a2e5-1e7c93631234\" (UID: \"c0377796-f4a7-420a-a2e5-1e7c93631234\") " Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.423789 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities" (OuterVolumeSpecName: "utilities") pod "c0377796-f4a7-420a-a2e5-1e7c93631234" (UID: "c0377796-f4a7-420a-a2e5-1e7c93631234"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.435133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx" (OuterVolumeSpecName: "kube-api-access-kg4nx") pod "c0377796-f4a7-420a-a2e5-1e7c93631234" (UID: "c0377796-f4a7-420a-a2e5-1e7c93631234"). InnerVolumeSpecName "kube-api-access-kg4nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.456513 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0377796-f4a7-420a-a2e5-1e7c93631234" (UID: "c0377796-f4a7-420a-a2e5-1e7c93631234"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.524236 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.524311 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4nx\" (UniqueName: \"kubernetes.io/projected/c0377796-f4a7-420a-a2e5-1e7c93631234-kube-api-access-kg4nx\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:13 crc kubenswrapper[4857]: I0318 15:12:13.524327 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0377796-f4a7-420a-a2e5-1e7c93631234-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:14 crc kubenswrapper[4857]: I0318 15:12:14.424942 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5dcpr" Mar 18 15:12:14 crc kubenswrapper[4857]: I0318 15:12:14.467213 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:12:14 crc kubenswrapper[4857]: I0318 15:12:14.481521 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5dcpr"] Mar 18 15:12:14 crc kubenswrapper[4857]: I0318 15:12:14.887955 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:12:15 crc kubenswrapper[4857]: I0318 15:12:15.181708 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" path="/var/lib/kubelet/pods/c0377796-f4a7-420a-a2e5-1e7c93631234/volumes" Mar 18 15:12:15 crc kubenswrapper[4857]: I0318 15:12:15.437791 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-5f4qr" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" containerID="cri-o://ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d" gracePeriod=2 Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.197583 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.389517 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrxxl\" (UniqueName: \"kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl\") pod \"2dab8745-583d-43f4-b903-c0d373d0139e\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.389701 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content\") pod \"2dab8745-583d-43f4-b903-c0d373d0139e\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.389863 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities\") pod \"2dab8745-583d-43f4-b903-c0d373d0139e\" (UID: \"2dab8745-583d-43f4-b903-c0d373d0139e\") " Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.390634 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities" (OuterVolumeSpecName: "utilities") pod "2dab8745-583d-43f4-b903-c0d373d0139e" (UID: "2dab8745-583d-43f4-b903-c0d373d0139e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.391535 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.397048 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl" (OuterVolumeSpecName: "kube-api-access-rrxxl") pod "2dab8745-583d-43f4-b903-c0d373d0139e" (UID: "2dab8745-583d-43f4-b903-c0d373d0139e"). InnerVolumeSpecName "kube-api-access-rrxxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.447641 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dab8745-583d-43f4-b903-c0d373d0139e" (UID: "2dab8745-583d-43f4-b903-c0d373d0139e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.456642 4857 generic.go:334] "Generic (PLEG): container finished" podID="2dab8745-583d-43f4-b903-c0d373d0139e" containerID="ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d" exitCode=0 Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.456695 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerDied","Data":"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d"} Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.456727 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f4qr" event={"ID":"2dab8745-583d-43f4-b903-c0d373d0139e","Type":"ContainerDied","Data":"f81d7a310253a8c9fe4d822701bd5f65820a616c211c98009545fd37974732b7"} Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.456769 4857 scope.go:117] "RemoveContainer" containerID="ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.456942 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5f4qr" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.494598 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrxxl\" (UniqueName: \"kubernetes.io/projected/2dab8745-583d-43f4-b903-c0d373d0139e-kube-api-access-rrxxl\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.494850 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dab8745-583d-43f4-b903-c0d373d0139e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.499214 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.500615 4857 scope.go:117] "RemoveContainer" containerID="1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.520273 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5f4qr"] Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.533767 4857 scope.go:117] "RemoveContainer" containerID="150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.593869 4857 scope.go:117] "RemoveContainer" containerID="ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d" Mar 18 15:12:16 crc kubenswrapper[4857]: E0318 15:12:16.594439 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d\": container with ID starting with ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d not found: ID does not exist" containerID="ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d" Mar 18 15:12:16 crc 
kubenswrapper[4857]: I0318 15:12:16.594477 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d"} err="failed to get container status \"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d\": rpc error: code = NotFound desc = could not find container \"ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d\": container with ID starting with ca310fdf52de832deefaf33ebf063aeab9d49a0b03828d27d10db5b2da48d13d not found: ID does not exist" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.594500 4857 scope.go:117] "RemoveContainer" containerID="1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32" Mar 18 15:12:16 crc kubenswrapper[4857]: E0318 15:12:16.594868 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32\": container with ID starting with 1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32 not found: ID does not exist" containerID="1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.594982 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32"} err="failed to get container status \"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32\": rpc error: code = NotFound desc = could not find container \"1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32\": container with ID starting with 1736cf164b9aa6b6ac9bb0dfab262edb8e4fedf8c08f8ca0596d476782a53d32 not found: ID does not exist" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.595124 4857 scope.go:117] "RemoveContainer" containerID="150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762" Mar 18 
15:12:16 crc kubenswrapper[4857]: E0318 15:12:16.595778 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762\": container with ID starting with 150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762 not found: ID does not exist" containerID="150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762" Mar 18 15:12:16 crc kubenswrapper[4857]: I0318 15:12:16.595822 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762"} err="failed to get container status \"150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762\": rpc error: code = NotFound desc = could not find container \"150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762\": container with ID starting with 150851fe872340d954b808e7cbdb101aa5be2072d814aa064509567a50381762 not found: ID does not exist" Mar 18 15:12:17 crc kubenswrapper[4857]: I0318 15:12:17.215018 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" path="/var/lib/kubelet/pods/2dab8745-583d-43f4-b903-c0d373d0139e/volumes" Mar 18 15:12:17 crc kubenswrapper[4857]: I0318 15:12:17.791112 4857 scope.go:117] "RemoveContainer" containerID="19a3c0b00067291a95a4baea6cb8b34e7ec969430339fcec38ac236ca3567fe8" Mar 18 15:12:51 crc kubenswrapper[4857]: I0318 15:12:51.002808 4857 trace.go:236] Trace[546676954]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (18-Mar-2026 15:12:49.947) (total time: 1052ms): Mar 18 15:12:51 crc kubenswrapper[4857]: Trace[546676954]: [1.052553302s] [1.052553302s] END Mar 18 15:12:57 crc kubenswrapper[4857]: I0318 15:12:57.042912 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:12:57 crc kubenswrapper[4857]: I0318 15:12:57.044114 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:13:27 crc kubenswrapper[4857]: I0318 15:13:27.039136 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:13:27 crc kubenswrapper[4857]: I0318 15:13:27.039834 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.039369 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.039912 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.040006 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.041287 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.041406 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" gracePeriod=600 Mar 18 15:13:57 crc kubenswrapper[4857]: E0318 15:13:57.166240 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.536209 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" exitCode=0 Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.536279 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675"} Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.536403 4857 scope.go:117] "RemoveContainer" containerID="d72be83b63720b1954afd27ac84c5058b691f9e545fe1a930895b040d03b8418" Mar 18 15:13:57 crc kubenswrapper[4857]: I0318 15:13:57.537553 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:13:57 crc kubenswrapper[4857]: E0318 15:13:57.537990 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.156837 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564114-5gpm2"] Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158524 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158612 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158648 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158668 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158710 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158725 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158786 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158803 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158843 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158863 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158920 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158940 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.158975 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01b94f4-418c-47fb-932f-d11a4e443579" containerName="oc" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.158994 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f01b94f4-418c-47fb-932f-d11a4e443579" containerName="oc" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.159038 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159068 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.159105 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159126 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="extract-utilities" Mar 18 15:14:00 crc kubenswrapper[4857]: E0318 15:14:00.159169 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159188 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="extract-content" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159838 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dab8745-583d-43f4-b903-c0d373d0139e" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159898 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c63b30d-e65e-493f-9632-9cfeddc620dd" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159919 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01b94f4-418c-47fb-932f-d11a4e443579" containerName="oc" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.159950 4857 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c0377796-f4a7-420a-a2e5-1e7c93631234" containerName="registry-server" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.161955 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.164580 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.164835 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.165269 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.170334 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564114-5gpm2"] Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.204674 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g27j\" (UniqueName: \"kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j\") pod \"auto-csr-approver-29564114-5gpm2\" (UID: \"5aaf5bc0-7425-4b4e-8ad2-856a929c2367\") " pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.309137 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g27j\" (UniqueName: \"kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j\") pod \"auto-csr-approver-29564114-5gpm2\" (UID: \"5aaf5bc0-7425-4b4e-8ad2-856a929c2367\") " pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.336787 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g27j\" 
(UniqueName: \"kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j\") pod \"auto-csr-approver-29564114-5gpm2\" (UID: \"5aaf5bc0-7425-4b4e-8ad2-856a929c2367\") " pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:00 crc kubenswrapper[4857]: I0318 15:14:00.502445 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:01 crc kubenswrapper[4857]: I0318 15:14:01.061835 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564114-5gpm2"] Mar 18 15:14:01 crc kubenswrapper[4857]: I0318 15:14:01.070048 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 15:14:01 crc kubenswrapper[4857]: I0318 15:14:01.606618 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" event={"ID":"5aaf5bc0-7425-4b4e-8ad2-856a929c2367","Type":"ContainerStarted","Data":"5f016aa7c1eb4158fe1103d6a7ced0e9f8da8f8d7a4ab62b95cbc7b3e155c566"} Mar 18 15:14:03 crc kubenswrapper[4857]: I0318 15:14:03.641129 4857 generic.go:334] "Generic (PLEG): container finished" podID="5aaf5bc0-7425-4b4e-8ad2-856a929c2367" containerID="fe5a5d84b6b9e2f6f05c43500476338fe543b6b9f8fb14f16008fe271ad1bc9c" exitCode=0 Mar 18 15:14:03 crc kubenswrapper[4857]: I0318 15:14:03.641991 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" event={"ID":"5aaf5bc0-7425-4b4e-8ad2-856a929c2367","Type":"ContainerDied","Data":"fe5a5d84b6b9e2f6f05c43500476338fe543b6b9f8fb14f16008fe271ad1bc9c"} Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.147831 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.294135 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g27j\" (UniqueName: \"kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j\") pod \"5aaf5bc0-7425-4b4e-8ad2-856a929c2367\" (UID: \"5aaf5bc0-7425-4b4e-8ad2-856a929c2367\") " Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.299125 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j" (OuterVolumeSpecName: "kube-api-access-7g27j") pod "5aaf5bc0-7425-4b4e-8ad2-856a929c2367" (UID: "5aaf5bc0-7425-4b4e-8ad2-856a929c2367"). InnerVolumeSpecName "kube-api-access-7g27j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.397092 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g27j\" (UniqueName: \"kubernetes.io/projected/5aaf5bc0-7425-4b4e-8ad2-856a929c2367-kube-api-access-7g27j\") on node \"crc\" DevicePath \"\"" Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.675500 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" event={"ID":"5aaf5bc0-7425-4b4e-8ad2-856a929c2367","Type":"ContainerDied","Data":"5f016aa7c1eb4158fe1103d6a7ced0e9f8da8f8d7a4ab62b95cbc7b3e155c566"} Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.675597 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f016aa7c1eb4158fe1103d6a7ced0e9f8da8f8d7a4ab62b95cbc7b3e155c566" Mar 18 15:14:05 crc kubenswrapper[4857]: I0318 15:14:05.675613 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564114-5gpm2" Mar 18 15:14:06 crc kubenswrapper[4857]: I0318 15:14:06.273199 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564108-pccc9"] Mar 18 15:14:06 crc kubenswrapper[4857]: I0318 15:14:06.289416 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564108-pccc9"] Mar 18 15:14:07 crc kubenswrapper[4857]: I0318 15:14:07.193554 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74d9897e-2547-447b-a090-e3ed15060a48" path="/var/lib/kubelet/pods/74d9897e-2547-447b-a090-e3ed15060a48/volumes" Mar 18 15:14:09 crc kubenswrapper[4857]: I0318 15:14:09.164619 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:14:09 crc kubenswrapper[4857]: E0318 15:14:09.165406 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:14:17 crc kubenswrapper[4857]: I0318 15:14:17.980508 4857 scope.go:117] "RemoveContainer" containerID="5b93402874347bcef0321d0c5761e2908a5ebc42ebbe382e66b8164fbef3a12f" Mar 18 15:14:21 crc kubenswrapper[4857]: I0318 15:14:21.165381 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:14:21 crc kubenswrapper[4857]: E0318 15:14:21.167160 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:14:36 crc kubenswrapper[4857]: I0318 15:14:36.164892 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:14:36 crc kubenswrapper[4857]: E0318 15:14:36.166025 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:14:49 crc kubenswrapper[4857]: I0318 15:14:49.172355 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:14:49 crc kubenswrapper[4857]: E0318 15:14:49.173807 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.170757 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx"] Mar 18 15:15:00 crc kubenswrapper[4857]: E0318 15:15:00.171969 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aaf5bc0-7425-4b4e-8ad2-856a929c2367" containerName="oc" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 
15:15:00.171987 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aaf5bc0-7425-4b4e-8ad2-856a929c2367" containerName="oc" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.172271 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aaf5bc0-7425-4b4e-8ad2-856a929c2367" containerName="oc" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.173330 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.175871 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.176053 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.184709 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx"] Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.272116 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.273064 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.273791 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zs6c\" (UniqueName: \"kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.376027 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.376148 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.376280 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zs6c\" (UniqueName: \"kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.377643 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.384699 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.395529 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zs6c\" (UniqueName: \"kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c\") pod \"collect-profiles-29564115-7xbsx\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:00 crc kubenswrapper[4857]: I0318 15:15:00.502296 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:01 crc kubenswrapper[4857]: I0318 15:15:01.016985 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx"] Mar 18 15:15:01 crc kubenswrapper[4857]: I0318 15:15:01.165153 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:15:01 crc kubenswrapper[4857]: E0318 15:15:01.165622 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:15:01 crc kubenswrapper[4857]: I0318 15:15:01.443275 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" event={"ID":"817f8522-e480-4288-b6a9-434f729fef33","Type":"ContainerStarted","Data":"6ee104bff8e5d48811b882d1aa6a2d41473afaf25027b31ec7f1a523bca2f039"} Mar 18 15:15:01 crc kubenswrapper[4857]: I0318 15:15:01.443347 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" event={"ID":"817f8522-e480-4288-b6a9-434f729fef33","Type":"ContainerStarted","Data":"bc1fd52faa65f44fc4d20dede1bf76c9afdfba5826864ee074e9e30ce0df43a5"} Mar 18 15:15:01 crc kubenswrapper[4857]: I0318 15:15:01.469062 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" podStartSLOduration=1.4690052599999999 podStartE2EDuration="1.46900526s" podCreationTimestamp="2026-03-18 15:15:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 15:15:01.462919876 +0000 UTC m=+4485.592048373" watchObservedRunningTime="2026-03-18 15:15:01.46900526 +0000 UTC m=+4485.598133717" Mar 18 15:15:02 crc kubenswrapper[4857]: I0318 15:15:02.462789 4857 generic.go:334] "Generic (PLEG): container finished" podID="817f8522-e480-4288-b6a9-434f729fef33" containerID="6ee104bff8e5d48811b882d1aa6a2d41473afaf25027b31ec7f1a523bca2f039" exitCode=0 Mar 18 15:15:02 crc kubenswrapper[4857]: I0318 15:15:02.463366 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" event={"ID":"817f8522-e480-4288-b6a9-434f729fef33","Type":"ContainerDied","Data":"6ee104bff8e5d48811b882d1aa6a2d41473afaf25027b31ec7f1a523bca2f039"} Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.489159 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" event={"ID":"817f8522-e480-4288-b6a9-434f729fef33","Type":"ContainerDied","Data":"bc1fd52faa65f44fc4d20dede1bf76c9afdfba5826864ee074e9e30ce0df43a5"} Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.489555 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc1fd52faa65f44fc4d20dede1bf76c9afdfba5826864ee074e9e30ce0df43a5" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.519607 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.638239 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume\") pod \"817f8522-e480-4288-b6a9-434f729fef33\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.638576 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zs6c\" (UniqueName: \"kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c\") pod \"817f8522-e480-4288-b6a9-434f729fef33\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.638698 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume\") pod \"817f8522-e480-4288-b6a9-434f729fef33\" (UID: \"817f8522-e480-4288-b6a9-434f729fef33\") " Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.639227 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume" (OuterVolumeSpecName: "config-volume") pod "817f8522-e480-4288-b6a9-434f729fef33" (UID: "817f8522-e480-4288-b6a9-434f729fef33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.645435 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "817f8522-e480-4288-b6a9-434f729fef33" (UID: "817f8522-e480-4288-b6a9-434f729fef33"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.656056 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c" (OuterVolumeSpecName: "kube-api-access-9zs6c") pod "817f8522-e480-4288-b6a9-434f729fef33" (UID: "817f8522-e480-4288-b6a9-434f729fef33"). InnerVolumeSpecName "kube-api-access-9zs6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.742460 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817f8522-e480-4288-b6a9-434f729fef33-config-volume\") on node \"crc\" DevicePath \"\"" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.742517 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zs6c\" (UniqueName: \"kubernetes.io/projected/817f8522-e480-4288-b6a9-434f729fef33-kube-api-access-9zs6c\") on node \"crc\" DevicePath \"\"" Mar 18 15:15:04 crc kubenswrapper[4857]: I0318 15:15:04.742555 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/817f8522-e480-4288-b6a9-434f729fef33-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 18 15:15:05 crc kubenswrapper[4857]: I0318 15:15:05.506706 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564115-7xbsx" Mar 18 15:15:05 crc kubenswrapper[4857]: I0318 15:15:05.624058 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"] Mar 18 15:15:05 crc kubenswrapper[4857]: I0318 15:15:05.641984 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564070-868gr"] Mar 18 15:15:07 crc kubenswrapper[4857]: I0318 15:15:07.177588 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f42948a-83fe-49d9-85a5-8a9c14e87b71" path="/var/lib/kubelet/pods/3f42948a-83fe-49d9-85a5-8a9c14e87b71/volumes" Mar 18 15:15:16 crc kubenswrapper[4857]: I0318 15:15:16.252892 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:15:16 crc kubenswrapper[4857]: E0318 15:15:16.254968 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:15:18 crc kubenswrapper[4857]: I0318 15:15:18.123314 4857 scope.go:117] "RemoveContainer" containerID="46e0eca70d959e891cd19a9591a50c82e0d3d19247883022ff70420c81089597" Mar 18 15:15:28 crc kubenswrapper[4857]: I0318 15:15:28.448836 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:15:28 crc kubenswrapper[4857]: E0318 15:15:28.452359 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:15:40 crc kubenswrapper[4857]: I0318 15:15:40.164898 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:15:40 crc kubenswrapper[4857]: E0318 15:15:40.166405 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:15:53 crc kubenswrapper[4857]: I0318 15:15:53.164599 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:15:53 crc kubenswrapper[4857]: E0318 15:15:53.165624 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.178096 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564116-px58j"] Mar 18 15:16:00 crc kubenswrapper[4857]: E0318 15:16:00.179938 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817f8522-e480-4288-b6a9-434f729fef33" containerName="collect-profiles" Mar 18 
15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.179972 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="817f8522-e480-4288-b6a9-434f729fef33" containerName="collect-profiles" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.180575 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="817f8522-e480-4288-b6a9-434f729fef33" containerName="collect-profiles" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.182546 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.185708 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.186618 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.187156 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.190107 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwf7\" (UniqueName: \"kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7\") pod \"auto-csr-approver-29564116-px58j\" (UID: \"aaff3533-e6de-4d5b-83ff-51559c41d738\") " pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.197065 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564116-px58j"] Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.292197 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwf7\" (UniqueName: \"kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7\") pod 
\"auto-csr-approver-29564116-px58j\" (UID: \"aaff3533-e6de-4d5b-83ff-51559c41d738\") " pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.320438 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwf7\" (UniqueName: \"kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7\") pod \"auto-csr-approver-29564116-px58j\" (UID: \"aaff3533-e6de-4d5b-83ff-51559c41d738\") " pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:00 crc kubenswrapper[4857]: I0318 15:16:00.526474 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:01 crc kubenswrapper[4857]: I0318 15:16:01.117413 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564116-px58j"] Mar 18 15:16:02 crc kubenswrapper[4857]: I0318 15:16:02.038517 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564116-px58j" event={"ID":"aaff3533-e6de-4d5b-83ff-51559c41d738","Type":"ContainerStarted","Data":"397420106b0bd092e21b5b43dea4732d0dbb8d9bdb73cc6b2db4f3c7d62f3c38"} Mar 18 15:16:04 crc kubenswrapper[4857]: I0318 15:16:04.314631 4857 generic.go:334] "Generic (PLEG): container finished" podID="aaff3533-e6de-4d5b-83ff-51559c41d738" containerID="ec2f4fd3c9c56adc458498f2a8601c3a4c1b9d8b5005aade8a30721f2fa07058" exitCode=0 Mar 18 15:16:04 crc kubenswrapper[4857]: I0318 15:16:04.314822 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564116-px58j" event={"ID":"aaff3533-e6de-4d5b-83ff-51559c41d738","Type":"ContainerDied","Data":"ec2f4fd3c9c56adc458498f2a8601c3a4c1b9d8b5005aade8a30721f2fa07058"} Mar 18 15:16:05 crc kubenswrapper[4857]: I0318 15:16:05.834287 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:05 crc kubenswrapper[4857]: I0318 15:16:05.991795 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwf7\" (UniqueName: \"kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7\") pod \"aaff3533-e6de-4d5b-83ff-51559c41d738\" (UID: \"aaff3533-e6de-4d5b-83ff-51559c41d738\") " Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.003605 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7" (OuterVolumeSpecName: "kube-api-access-xtwf7") pod "aaff3533-e6de-4d5b-83ff-51559c41d738" (UID: "aaff3533-e6de-4d5b-83ff-51559c41d738"). InnerVolumeSpecName "kube-api-access-xtwf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.095108 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwf7\" (UniqueName: \"kubernetes.io/projected/aaff3533-e6de-4d5b-83ff-51559c41d738-kube-api-access-xtwf7\") on node \"crc\" DevicePath \"\"" Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.357494 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564116-px58j" event={"ID":"aaff3533-e6de-4d5b-83ff-51559c41d738","Type":"ContainerDied","Data":"397420106b0bd092e21b5b43dea4732d0dbb8d9bdb73cc6b2db4f3c7d62f3c38"} Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.357549 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="397420106b0bd092e21b5b43dea4732d0dbb8d9bdb73cc6b2db4f3c7d62f3c38" Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.357566 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564116-px58j" Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.934768 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564110-5g4cl"] Mar 18 15:16:06 crc kubenswrapper[4857]: I0318 15:16:06.944063 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564110-5g4cl"] Mar 18 15:16:07 crc kubenswrapper[4857]: I0318 15:16:07.181832 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29cb6542-f295-478c-88c9-b41bfeb4b7a1" path="/var/lib/kubelet/pods/29cb6542-f295-478c-88c9-b41bfeb4b7a1/volumes" Mar 18 15:16:08 crc kubenswrapper[4857]: I0318 15:16:08.453157 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:16:08 crc kubenswrapper[4857]: E0318 15:16:08.453898 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:16:18 crc kubenswrapper[4857]: I0318 15:16:18.311860 4857 scope.go:117] "RemoveContainer" containerID="bbbbb34551698248c68f02fa67cd9fab06ced2fe0c7e701a0227697032f32417" Mar 18 15:16:20 crc kubenswrapper[4857]: I0318 15:16:20.363335 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:16:20 crc kubenswrapper[4857]: E0318 15:16:20.367028 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:16:33 crc kubenswrapper[4857]: I0318 15:16:33.164737 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:16:33 crc kubenswrapper[4857]: E0318 15:16:33.165729 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:16:45 crc kubenswrapper[4857]: I0318 15:16:45.166912 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:16:45 crc kubenswrapper[4857]: E0318 15:16:45.167880 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:16:58 crc kubenswrapper[4857]: I0318 15:16:58.164597 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:16:58 crc kubenswrapper[4857]: E0318 15:16:58.165959 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:17:11 crc kubenswrapper[4857]: I0318 15:17:11.165609 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:17:11 crc kubenswrapper[4857]: E0318 15:17:11.166712 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:17:22 crc kubenswrapper[4857]: I0318 15:17:22.165081 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:17:22 crc kubenswrapper[4857]: E0318 15:17:22.166323 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:17:35 crc kubenswrapper[4857]: I0318 15:17:35.346764 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:17:35 crc kubenswrapper[4857]: E0318 15:17:35.349011 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:17:49 crc kubenswrapper[4857]: I0318 15:17:49.375735 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:17:49 crc kubenswrapper[4857]: E0318 15:17:49.376551 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.168554 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564118-w5rzn"] Mar 18 15:18:00 crc kubenswrapper[4857]: E0318 15:18:00.170072 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaff3533-e6de-4d5b-83ff-51559c41d738" containerName="oc" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.170112 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaff3533-e6de-4d5b-83ff-51559c41d738" containerName="oc" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.170564 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaff3533-e6de-4d5b-83ff-51559c41d738" containerName="oc" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.171959 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.175117 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.175158 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.182567 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.187530 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564118-w5rzn"] Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.218562 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mvg\" (UniqueName: \"kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg\") pod \"auto-csr-approver-29564118-w5rzn\" (UID: \"175ec8bb-145b-4b77-b6f6-2d52e10b5f31\") " pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.322110 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mvg\" (UniqueName: \"kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg\") pod \"auto-csr-approver-29564118-w5rzn\" (UID: \"175ec8bb-145b-4b77-b6f6-2d52e10b5f31\") " pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.342938 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mvg\" (UniqueName: \"kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg\") pod \"auto-csr-approver-29564118-w5rzn\" (UID: \"175ec8bb-145b-4b77-b6f6-2d52e10b5f31\") " 
pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:00 crc kubenswrapper[4857]: I0318 15:18:00.507013 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:01 crc kubenswrapper[4857]: I0318 15:18:01.023515 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564118-w5rzn"] Mar 18 15:18:02 crc kubenswrapper[4857]: I0318 15:18:02.037480 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" event={"ID":"175ec8bb-145b-4b77-b6f6-2d52e10b5f31","Type":"ContainerStarted","Data":"dd3734866f525d082b95b4be8e0d85080425ea3fea71d6206a7d537d527a3fa2"} Mar 18 15:18:03 crc kubenswrapper[4857]: I0318 15:18:03.049499 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" event={"ID":"175ec8bb-145b-4b77-b6f6-2d52e10b5f31","Type":"ContainerStarted","Data":"d87469374ce9b44ebf148dd6915ea18f572697acf385bb84a814e235a22a7b2c"} Mar 18 15:18:03 crc kubenswrapper[4857]: I0318 15:18:03.070869 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" podStartSLOduration=1.69580497 podStartE2EDuration="3.070826517s" podCreationTimestamp="2026-03-18 15:18:00 +0000 UTC" firstStartedPulling="2026-03-18 15:18:01.028859736 +0000 UTC m=+4665.157988213" lastFinishedPulling="2026-03-18 15:18:02.403881293 +0000 UTC m=+4666.533009760" observedRunningTime="2026-03-18 15:18:03.065404361 +0000 UTC m=+4667.194532828" watchObservedRunningTime="2026-03-18 15:18:03.070826517 +0000 UTC m=+4667.199954974" Mar 18 15:18:04 crc kubenswrapper[4857]: I0318 15:18:04.063323 4857 generic.go:334] "Generic (PLEG): container finished" podID="175ec8bb-145b-4b77-b6f6-2d52e10b5f31" containerID="d87469374ce9b44ebf148dd6915ea18f572697acf385bb84a814e235a22a7b2c" exitCode=0 Mar 18 15:18:04 crc 
kubenswrapper[4857]: I0318 15:18:04.063394 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" event={"ID":"175ec8bb-145b-4b77-b6f6-2d52e10b5f31","Type":"ContainerDied","Data":"d87469374ce9b44ebf148dd6915ea18f572697acf385bb84a814e235a22a7b2c"} Mar 18 15:18:04 crc kubenswrapper[4857]: I0318 15:18:04.164651 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:18:04 crc kubenswrapper[4857]: E0318 15:18:04.165201 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:18:05 crc kubenswrapper[4857]: I0318 15:18:05.561605 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:05 crc kubenswrapper[4857]: I0318 15:18:05.685073 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2mvg\" (UniqueName: \"kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg\") pod \"175ec8bb-145b-4b77-b6f6-2d52e10b5f31\" (UID: \"175ec8bb-145b-4b77-b6f6-2d52e10b5f31\") " Mar 18 15:18:05 crc kubenswrapper[4857]: I0318 15:18:05.695201 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg" (OuterVolumeSpecName: "kube-api-access-m2mvg") pod "175ec8bb-145b-4b77-b6f6-2d52e10b5f31" (UID: "175ec8bb-145b-4b77-b6f6-2d52e10b5f31"). InnerVolumeSpecName "kube-api-access-m2mvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:18:05 crc kubenswrapper[4857]: I0318 15:18:05.789054 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2mvg\" (UniqueName: \"kubernetes.io/projected/175ec8bb-145b-4b77-b6f6-2d52e10b5f31-kube-api-access-m2mvg\") on node \"crc\" DevicePath \"\"" Mar 18 15:18:06 crc kubenswrapper[4857]: I0318 15:18:06.096482 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" event={"ID":"175ec8bb-145b-4b77-b6f6-2d52e10b5f31","Type":"ContainerDied","Data":"dd3734866f525d082b95b4be8e0d85080425ea3fea71d6206a7d537d527a3fa2"} Mar 18 15:18:06 crc kubenswrapper[4857]: I0318 15:18:06.096578 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd3734866f525d082b95b4be8e0d85080425ea3fea71d6206a7d537d527a3fa2" Mar 18 15:18:06 crc kubenswrapper[4857]: I0318 15:18:06.096585 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564118-w5rzn" Mar 18 15:18:06 crc kubenswrapper[4857]: I0318 15:18:06.175093 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564112-8wvnz"] Mar 18 15:18:06 crc kubenswrapper[4857]: I0318 15:18:06.188479 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564112-8wvnz"] Mar 18 15:18:07 crc kubenswrapper[4857]: I0318 15:18:07.232129 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01b94f4-418c-47fb-932f-d11a4e443579" path="/var/lib/kubelet/pods/f01b94f4-418c-47fb-932f-d11a4e443579/volumes" Mar 18 15:18:18 crc kubenswrapper[4857]: I0318 15:18:18.165092 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:18:18 crc kubenswrapper[4857]: E0318 15:18:18.166476 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:18:18 crc kubenswrapper[4857]: I0318 15:18:18.477318 4857 scope.go:117] "RemoveContainer" containerID="abcf0e407765e87248da881a1cfa5e31bbdd4dac1618d98288a6d62f5b6cb882" Mar 18 15:18:18 crc kubenswrapper[4857]: I0318 15:18:18.522247 4857 scope.go:117] "RemoveContainer" containerID="fa559c2d7e35b2b596ce484bc3b196202f7f590a1203a9be2d5663ae202d1724" Mar 18 15:18:18 crc kubenswrapper[4857]: I0318 15:18:18.593841 4857 scope.go:117] "RemoveContainer" containerID="e136282cdf24037c1397779f912e2aa4ccd933090c5331b9dbfb991048f3667a" Mar 18 15:18:18 crc kubenswrapper[4857]: I0318 15:18:18.693478 4857 scope.go:117] "RemoveContainer" containerID="0c24f5ccf0ad79a00068d285791e01c01a19563adcbe1cef5432997bba774685" Mar 18 15:18:30 crc kubenswrapper[4857]: I0318 15:18:30.164455 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:18:30 crc kubenswrapper[4857]: E0318 15:18:30.165880 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:18:42 crc kubenswrapper[4857]: I0318 15:18:42.163844 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:18:42 crc kubenswrapper[4857]: E0318 15:18:42.164608 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:18:57 crc kubenswrapper[4857]: I0318 15:18:57.165251 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675" Mar 18 15:18:59 crc kubenswrapper[4857]: I0318 15:18:59.070250 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8"} Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.190034 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564120-tvzt6"] Mar 18 15:20:00 crc kubenswrapper[4857]: E0318 15:20:00.191294 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175ec8bb-145b-4b77-b6f6-2d52e10b5f31" containerName="oc" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.191314 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="175ec8bb-145b-4b77-b6f6-2d52e10b5f31" containerName="oc" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.191770 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="175ec8bb-145b-4b77-b6f6-2d52e10b5f31" containerName="oc" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.192993 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.195880 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.196177 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.196304 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.211373 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564120-tvzt6"] Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.344515 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brgcz\" (UniqueName: \"kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz\") pod \"auto-csr-approver-29564120-tvzt6\" (UID: \"163be1db-a9a5-4188-8d3e-65468f40167e\") " pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.447473 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brgcz\" (UniqueName: \"kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz\") pod \"auto-csr-approver-29564120-tvzt6\" (UID: \"163be1db-a9a5-4188-8d3e-65468f40167e\") " pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.489649 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brgcz\" (UniqueName: \"kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz\") pod \"auto-csr-approver-29564120-tvzt6\" (UID: \"163be1db-a9a5-4188-8d3e-65468f40167e\") " 
pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:00 crc kubenswrapper[4857]: I0318 15:20:00.519804 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:01 crc kubenswrapper[4857]: I0318 15:20:01.450611 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564120-tvzt6"] Mar 18 15:20:01 crc kubenswrapper[4857]: I0318 15:20:01.456169 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 15:20:01 crc kubenswrapper[4857]: I0318 15:20:01.924171 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" event={"ID":"163be1db-a9a5-4188-8d3e-65468f40167e","Type":"ContainerStarted","Data":"ebfd6c2538bb65fe618b3cbd441e67ea759239865a72b999e50c945ff3fec23f"} Mar 18 15:20:03 crc kubenswrapper[4857]: I0318 15:20:03.967289 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" event={"ID":"163be1db-a9a5-4188-8d3e-65468f40167e","Type":"ContainerStarted","Data":"300f5db6e93930ca02507d5295fd5064d04bdf1d42186e032ec6077e70bcfe70"} Mar 18 15:20:04 crc kubenswrapper[4857]: I0318 15:20:04.027276 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" podStartSLOduration=2.466752696 podStartE2EDuration="4.027238961s" podCreationTimestamp="2026-03-18 15:20:00 +0000 UTC" firstStartedPulling="2026-03-18 15:20:01.455948678 +0000 UTC m=+4785.585077135" lastFinishedPulling="2026-03-18 15:20:03.016434943 +0000 UTC m=+4787.145563400" observedRunningTime="2026-03-18 15:20:03.991024657 +0000 UTC m=+4788.120153134" watchObservedRunningTime="2026-03-18 15:20:04.027238961 +0000 UTC m=+4788.156367418" Mar 18 15:20:04 crc kubenswrapper[4857]: I0318 15:20:04.984728 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="163be1db-a9a5-4188-8d3e-65468f40167e" containerID="300f5db6e93930ca02507d5295fd5064d04bdf1d42186e032ec6077e70bcfe70" exitCode=0 Mar 18 15:20:04 crc kubenswrapper[4857]: I0318 15:20:04.984884 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" event={"ID":"163be1db-a9a5-4188-8d3e-65468f40167e","Type":"ContainerDied","Data":"300f5db6e93930ca02507d5295fd5064d04bdf1d42186e032ec6077e70bcfe70"} Mar 18 15:20:06 crc kubenswrapper[4857]: I0318 15:20:06.630717 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:06 crc kubenswrapper[4857]: I0318 15:20:06.706892 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brgcz\" (UniqueName: \"kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz\") pod \"163be1db-a9a5-4188-8d3e-65468f40167e\" (UID: \"163be1db-a9a5-4188-8d3e-65468f40167e\") " Mar 18 15:20:06 crc kubenswrapper[4857]: I0318 15:20:06.722975 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz" (OuterVolumeSpecName: "kube-api-access-brgcz") pod "163be1db-a9a5-4188-8d3e-65468f40167e" (UID: "163be1db-a9a5-4188-8d3e-65468f40167e"). InnerVolumeSpecName "kube-api-access-brgcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:20:06 crc kubenswrapper[4857]: I0318 15:20:06.810017 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brgcz\" (UniqueName: \"kubernetes.io/projected/163be1db-a9a5-4188-8d3e-65468f40167e-kube-api-access-brgcz\") on node \"crc\" DevicePath \"\"" Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.015933 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" event={"ID":"163be1db-a9a5-4188-8d3e-65468f40167e","Type":"ContainerDied","Data":"ebfd6c2538bb65fe618b3cbd441e67ea759239865a72b999e50c945ff3fec23f"} Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.016237 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebfd6c2538bb65fe618b3cbd441e67ea759239865a72b999e50c945ff3fec23f" Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.016126 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564120-tvzt6" Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.084331 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564114-5gpm2"] Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.098681 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564114-5gpm2"] Mar 18 15:20:07 crc kubenswrapper[4857]: I0318 15:20:07.180598 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aaf5bc0-7425-4b4e-8ad2-856a929c2367" path="/var/lib/kubelet/pods/5aaf5bc0-7425-4b4e-8ad2-856a929c2367/volumes" Mar 18 15:20:18 crc kubenswrapper[4857]: I0318 15:20:18.960261 4857 scope.go:117] "RemoveContainer" containerID="fe5a5d84b6b9e2f6f05c43500476338fe543b6b9f8fb14f16008fe271ad1bc9c" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.588575 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:20:32 crc kubenswrapper[4857]: E0318 15:20:32.594308 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163be1db-a9a5-4188-8d3e-65468f40167e" containerName="oc" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.594345 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="163be1db-a9a5-4188-8d3e-65468f40167e" containerName="oc" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.594717 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="163be1db-a9a5-4188-8d3e-65468f40167e" containerName="oc" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.597757 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.605546 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.753617 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.754300 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwnvg\" (UniqueName: \"kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:32 crc kubenswrapper[4857]: I0318 15:20:32.754491 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.139756 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwnvg\" (UniqueName: \"kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.139871 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.139932 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.140486 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.140718 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.191653 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwnvg\" (UniqueName: \"kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg\") pod \"redhat-operators-tp2xb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.250838 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:33 crc kubenswrapper[4857]: I0318 15:20:33.782590 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:20:34 crc kubenswrapper[4857]: I0318 15:20:34.823085 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerStarted","Data":"c4e8d5a5655d7876bd575d40e4a799cebd9753c052c1bab819d38471f6bfd4c4"} Mar 18 15:20:35 crc kubenswrapper[4857]: I0318 15:20:35.840376 4857 generic.go:334] "Generic (PLEG): container finished" podID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerID="c3093f2e5f7995ab340944f32036258bf280096df9f826fd8cbcb3278bcc9295" exitCode=0 Mar 18 15:20:35 crc kubenswrapper[4857]: I0318 15:20:35.840457 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerDied","Data":"c3093f2e5f7995ab340944f32036258bf280096df9f826fd8cbcb3278bcc9295"} Mar 18 15:20:37 crc kubenswrapper[4857]: I0318 15:20:37.991118 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerStarted","Data":"b0a3c50fddfb8a04b1c57477401432c9b5a66a4e1fe01af164729bac51c54884"} Mar 18 15:20:44 crc kubenswrapper[4857]: I0318 15:20:44.381034 4857 generic.go:334] "Generic (PLEG): container finished" podID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerID="b0a3c50fddfb8a04b1c57477401432c9b5a66a4e1fe01af164729bac51c54884" exitCode=0 Mar 18 15:20:44 crc kubenswrapper[4857]: I0318 15:20:44.381111 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerDied","Data":"b0a3c50fddfb8a04b1c57477401432c9b5a66a4e1fe01af164729bac51c54884"} Mar 18 15:20:46 crc kubenswrapper[4857]: I0318 15:20:46.423159 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerStarted","Data":"a68c7e6cf4bd154695fc69bb73a1cd03b0017bebb78e5960ae8281e4300ca02d"} Mar 18 15:20:46 crc kubenswrapper[4857]: I0318 15:20:46.457028 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tp2xb" podStartSLOduration=4.737844303 podStartE2EDuration="14.457004794s" podCreationTimestamp="2026-03-18 15:20:32 +0000 UTC" firstStartedPulling="2026-03-18 15:20:35.844167779 +0000 UTC m=+4819.973296246" lastFinishedPulling="2026-03-18 15:20:45.56332827 +0000 UTC m=+4829.692456737" observedRunningTime="2026-03-18 15:20:46.444743464 +0000 UTC m=+4830.573871921" watchObservedRunningTime="2026-03-18 15:20:46.457004794 +0000 UTC m=+4830.586133251" Mar 18 15:20:53 crc kubenswrapper[4857]: I0318 15:20:53.251683 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:53 crc kubenswrapper[4857]: I0318 15:20:53.252326 4857 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:20:54 crc kubenswrapper[4857]: I0318 15:20:54.764564 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp2xb" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" probeResult="failure" output=< Mar 18 15:20:54 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:20:54 crc kubenswrapper[4857]: > Mar 18 15:21:04 crc kubenswrapper[4857]: I0318 15:21:04.314582 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp2xb" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" probeResult="failure" output=< Mar 18 15:21:04 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:21:04 crc kubenswrapper[4857]: > Mar 18 15:21:14 crc kubenswrapper[4857]: I0318 15:21:14.323809 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp2xb" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" probeResult="failure" output=< Mar 18 15:21:14 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:21:14 crc kubenswrapper[4857]: > Mar 18 15:21:23 crc kubenswrapper[4857]: I0318 15:21:23.339974 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:21:23 crc kubenswrapper[4857]: I0318 15:21:23.422239 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:21:23 crc kubenswrapper[4857]: I0318 15:21:23.607708 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:21:24 crc kubenswrapper[4857]: I0318 15:21:24.771264 4857 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tp2xb" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" containerID="cri-o://a68c7e6cf4bd154695fc69bb73a1cd03b0017bebb78e5960ae8281e4300ca02d" gracePeriod=2 Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.789213 4857 generic.go:334] "Generic (PLEG): container finished" podID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerID="a68c7e6cf4bd154695fc69bb73a1cd03b0017bebb78e5960ae8281e4300ca02d" exitCode=0 Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.789298 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerDied","Data":"a68c7e6cf4bd154695fc69bb73a1cd03b0017bebb78e5960ae8281e4300ca02d"} Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.789576 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xb" event={"ID":"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb","Type":"ContainerDied","Data":"c4e8d5a5655d7876bd575d40e4a799cebd9753c052c1bab819d38471f6bfd4c4"} Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.789605 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e8d5a5655d7876bd575d40e4a799cebd9753c052c1bab819d38471f6bfd4c4" Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.861373 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.892745 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities\") pod \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.893265 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content\") pod \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.893540 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwnvg\" (UniqueName: \"kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg\") pod \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\" (UID: \"b185c56d-5eb4-45b3-a9ee-7bd8603c48cb\") " Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.894705 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities" (OuterVolumeSpecName: "utilities") pod "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" (UID: "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.899352 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg" (OuterVolumeSpecName: "kube-api-access-jwnvg") pod "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" (UID: "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb"). InnerVolumeSpecName "kube-api-access-jwnvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.998818 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:21:25 crc kubenswrapper[4857]: I0318 15:21:25.998850 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwnvg\" (UniqueName: \"kubernetes.io/projected/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-kube-api-access-jwnvg\") on node \"crc\" DevicePath \"\"" Mar 18 15:21:26 crc kubenswrapper[4857]: I0318 15:21:26.033599 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" (UID: "b185c56d-5eb4-45b3-a9ee-7bd8603c48cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:21:26 crc kubenswrapper[4857]: I0318 15:21:26.101673 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:21:26 crc kubenswrapper[4857]: I0318 15:21:26.814230 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xb" Mar 18 15:21:26 crc kubenswrapper[4857]: I0318 15:21:26.893204 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:21:26 crc kubenswrapper[4857]: I0318 15:21:26.919147 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xb"] Mar 18 15:21:27 crc kubenswrapper[4857]: I0318 15:21:27.038933 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:21:27 crc kubenswrapper[4857]: I0318 15:21:27.039350 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:21:27 crc kubenswrapper[4857]: I0318 15:21:27.177980 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" path="/var/lib/kubelet/pods/b185c56d-5eb4-45b3-a9ee-7bd8603c48cb/volumes" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.291613 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:21:50 crc kubenswrapper[4857]: E0318 15:21:50.293513 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.293572 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" Mar 18 15:21:50 crc 
kubenswrapper[4857]: E0318 15:21:50.293609 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="extract-utilities" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.293625 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="extract-utilities" Mar 18 15:21:50 crc kubenswrapper[4857]: E0318 15:21:50.293670 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="extract-content" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.293684 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="extract-content" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.294312 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="b185c56d-5eb4-45b3-a9ee-7bd8603c48cb" containerName="registry-server" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.298262 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.311331 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.462276 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.462432 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.464013 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk9r9\" (UniqueName: \"kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.567065 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.567304 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qk9r9\" (UniqueName: \"kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.567661 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.567993 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.568256 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.601674 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk9r9\" (UniqueName: \"kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9\") pod \"certified-operators-8l68v\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:50 crc kubenswrapper[4857]: I0318 15:21:50.634573 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:21:51 crc kubenswrapper[4857]: I0318 15:21:51.522454 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:21:51 crc kubenswrapper[4857]: I0318 15:21:51.636097 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerStarted","Data":"a7b07d66e03843b5d9a8873a5901d459ee18e663f1ea0405b25b8bd1a2ce0730"} Mar 18 15:21:52 crc kubenswrapper[4857]: I0318 15:21:52.656521 4857 generic.go:334] "Generic (PLEG): container finished" podID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerID="3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3" exitCode=0 Mar 18 15:21:52 crc kubenswrapper[4857]: I0318 15:21:52.656620 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerDied","Data":"3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3"} Mar 18 15:21:54 crc kubenswrapper[4857]: I0318 15:21:54.736628 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerStarted","Data":"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71"} Mar 18 15:21:55 crc kubenswrapper[4857]: I0318 15:21:55.756217 4857 generic.go:334] "Generic (PLEG): container finished" podID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerID="2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71" exitCode=0 Mar 18 15:21:55 crc kubenswrapper[4857]: I0318 15:21:55.756339 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" 
event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerDied","Data":"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71"} Mar 18 15:21:57 crc kubenswrapper[4857]: I0318 15:21:57.038674 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:21:57 crc kubenswrapper[4857]: I0318 15:21:57.039038 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:21:57 crc kubenswrapper[4857]: I0318 15:21:57.796007 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerStarted","Data":"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b"} Mar 18 15:21:57 crc kubenswrapper[4857]: I0318 15:21:57.827671 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8l68v" podStartSLOduration=4.189165478 podStartE2EDuration="7.827635203s" podCreationTimestamp="2026-03-18 15:21:50 +0000 UTC" firstStartedPulling="2026-03-18 15:21:52.663584863 +0000 UTC m=+4896.792713340" lastFinishedPulling="2026-03-18 15:21:56.302054608 +0000 UTC m=+4900.431183065" observedRunningTime="2026-03-18 15:21:57.819093418 +0000 UTC m=+4901.948221875" watchObservedRunningTime="2026-03-18 15:21:57.827635203 +0000 UTC m=+4901.956763660" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.199542 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29564122-85p5g"] Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.203265 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.209099 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.209351 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.210292 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.214853 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564122-85p5g"] Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.225396 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vbc\" (UniqueName: \"kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc\") pod \"auto-csr-approver-29564122-85p5g\" (UID: \"028c0ae2-15f2-486d-b917-9c93255cc572\") " pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.326645 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vbc\" (UniqueName: \"kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc\") pod \"auto-csr-approver-29564122-85p5g\" (UID: \"028c0ae2-15f2-486d-b917-9c93255cc572\") " pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.476607 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vbc\" (UniqueName: 
\"kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc\") pod \"auto-csr-approver-29564122-85p5g\" (UID: \"028c0ae2-15f2-486d-b917-9c93255cc572\") " pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.530536 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"] Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.539442 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.543892 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.559710 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"] Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.635183 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.635675 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.732730 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.748467 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.748524 
4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btkm\" (UniqueName: \"kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.748561 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.851081 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.851404 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btkm\" (UniqueName: \"kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.851446 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc 
kubenswrapper[4857]: I0318 15:22:00.851588 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.852129 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.874054 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btkm\" (UniqueName: \"kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm\") pod \"redhat-marketplace-tvn9f\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") " pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:00 crc kubenswrapper[4857]: I0318 15:22:00.894185 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:01 crc kubenswrapper[4857]: I0318 15:22:01.261697 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564122-85p5g"] Mar 18 15:22:01 crc kubenswrapper[4857]: I0318 15:22:01.379138 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:01 crc kubenswrapper[4857]: I0318 15:22:01.820311 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"] Mar 18 15:22:02 crc kubenswrapper[4857]: I0318 15:22:02.260097 4857 generic.go:334] "Generic (PLEG): container finished" podID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerID="401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb" exitCode=0 Mar 18 15:22:02 crc kubenswrapper[4857]: I0318 15:22:02.260458 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerDied","Data":"401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb"} Mar 18 15:22:02 crc kubenswrapper[4857]: I0318 15:22:02.260494 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerStarted","Data":"9adb01904a5cf52d25775269bfba2032acbb8368c2dbb4613a9364b84c3b61da"} Mar 18 15:22:02 crc kubenswrapper[4857]: I0318 15:22:02.267819 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564122-85p5g" event={"ID":"028c0ae2-15f2-486d-b917-9c93255cc572","Type":"ContainerStarted","Data":"69b3de2c736e255b331b96c15d53c6f3512d3d8ed019c3de265fe40e6c8f4ee1"} Mar 18 15:22:03 crc kubenswrapper[4857]: I0318 15:22:03.073451 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:22:03 crc kubenswrapper[4857]: I0318 15:22:03.277676 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8l68v" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="registry-server" containerID="cri-o://c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b" gracePeriod=2 Mar 18 15:22:03 crc kubenswrapper[4857]: E0318 15:22:03.574089 4857 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.89:59352->38.102.83.89:44309: write tcp 38.102.83.89:59352->38.102.83.89:44309: write: broken pipe Mar 18 15:22:03 crc kubenswrapper[4857]: I0318 15:22:03.952737 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.094883 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities\") pod \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.095716 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk9r9\" (UniqueName: \"kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9\") pod \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.095864 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content\") pod \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\" (UID: \"5c1229cd-03a3-4f80-ad25-df4a3481f58d\") " Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 
15:22:04.096256 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities" (OuterVolumeSpecName: "utilities") pod "5c1229cd-03a3-4f80-ad25-df4a3481f58d" (UID: "5c1229cd-03a3-4f80-ad25-df4a3481f58d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.096837 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.108285 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9" (OuterVolumeSpecName: "kube-api-access-qk9r9") pod "5c1229cd-03a3-4f80-ad25-df4a3481f58d" (UID: "5c1229cd-03a3-4f80-ad25-df4a3481f58d"). InnerVolumeSpecName "kube-api-access-qk9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.198315 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk9r9\" (UniqueName: \"kubernetes.io/projected/5c1229cd-03a3-4f80-ad25-df4a3481f58d-kube-api-access-qk9r9\") on node \"crc\" DevicePath \"\"" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.200671 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c1229cd-03a3-4f80-ad25-df4a3481f58d" (UID: "5c1229cd-03a3-4f80-ad25-df4a3481f58d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.295588 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564122-85p5g" event={"ID":"028c0ae2-15f2-486d-b917-9c93255cc572","Type":"ContainerStarted","Data":"93778f10811aa25c4e3f67d8f6ef551ebfb7c3244ab82ae755f5f8f2d7e3bac1"} Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.301137 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c1229cd-03a3-4f80-ad25-df4a3481f58d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.304653 4857 generic.go:334] "Generic (PLEG): container finished" podID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerID="c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b" exitCode=0 Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.304787 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8l68v" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.304885 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerDied","Data":"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b"} Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.304956 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8l68v" event={"ID":"5c1229cd-03a3-4f80-ad25-df4a3481f58d","Type":"ContainerDied","Data":"a7b07d66e03843b5d9a8873a5901d459ee18e663f1ea0405b25b8bd1a2ce0730"} Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.305060 4857 scope.go:117] "RemoveContainer" containerID="c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.307452 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerStarted","Data":"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"} Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.342282 4857 scope.go:117] "RemoveContainer" containerID="2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.371730 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564122-85p5g" podStartSLOduration=2.822726502 podStartE2EDuration="4.371675877s" podCreationTimestamp="2026-03-18 15:22:00 +0000 UTC" firstStartedPulling="2026-03-18 15:22:01.345433545 +0000 UTC m=+4905.474562002" lastFinishedPulling="2026-03-18 15:22:02.89438292 +0000 UTC m=+4907.023511377" observedRunningTime="2026-03-18 15:22:04.325210935 +0000 UTC m=+4908.454339392" watchObservedRunningTime="2026-03-18 
15:22:04.371675877 +0000 UTC m=+4908.500804334" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.389108 4857 scope.go:117] "RemoveContainer" containerID="3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.391986 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.404259 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8l68v"] Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.434420 4857 scope.go:117] "RemoveContainer" containerID="c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b" Mar 18 15:22:04 crc kubenswrapper[4857]: E0318 15:22:04.436448 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b\": container with ID starting with c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b not found: ID does not exist" containerID="c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.436508 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b"} err="failed to get container status \"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b\": rpc error: code = NotFound desc = could not find container \"c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b\": container with ID starting with c1f4500fefe7bdfe00c1352bdbbe3aba9648e3696ccbe4e5054e869fad35d98b not found: ID does not exist" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.436541 4857 scope.go:117] "RemoveContainer" containerID="2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71" Mar 18 
15:22:04 crc kubenswrapper[4857]: E0318 15:22:04.437660 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71\": container with ID starting with 2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71 not found: ID does not exist" containerID="2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.437728 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71"} err="failed to get container status \"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71\": rpc error: code = NotFound desc = could not find container \"2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71\": container with ID starting with 2a913992709c02330e678cc7fd954b8b08ead82a6ee87a2c5e41230b241cfd71 not found: ID does not exist" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.437790 4857 scope.go:117] "RemoveContainer" containerID="3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3" Mar 18 15:22:04 crc kubenswrapper[4857]: E0318 15:22:04.438507 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3\": container with ID starting with 3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3 not found: ID does not exist" containerID="3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3" Mar 18 15:22:04 crc kubenswrapper[4857]: I0318 15:22:04.438562 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3"} err="failed to get container status 
\"3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3\": rpc error: code = NotFound desc = could not find container \"3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3\": container with ID starting with 3bcc286a84638b47531fa4489339ac379a7bf236c1f9f9c5feb36102c103e8d3 not found: ID does not exist" Mar 18 15:22:05 crc kubenswrapper[4857]: I0318 15:22:05.184490 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" path="/var/lib/kubelet/pods/5c1229cd-03a3-4f80-ad25-df4a3481f58d/volumes" Mar 18 15:22:05 crc kubenswrapper[4857]: I0318 15:22:05.365616 4857 generic.go:334] "Generic (PLEG): container finished" podID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerID="25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9" exitCode=0 Mar 18 15:22:05 crc kubenswrapper[4857]: I0318 15:22:05.366193 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerDied","Data":"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"} Mar 18 15:22:05 crc kubenswrapper[4857]: I0318 15:22:05.369489 4857 generic.go:334] "Generic (PLEG): container finished" podID="028c0ae2-15f2-486d-b917-9c93255cc572" containerID="93778f10811aa25c4e3f67d8f6ef551ebfb7c3244ab82ae755f5f8f2d7e3bac1" exitCode=0 Mar 18 15:22:05 crc kubenswrapper[4857]: I0318 15:22:05.369523 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564122-85p5g" event={"ID":"028c0ae2-15f2-486d-b917-9c93255cc572","Type":"ContainerDied","Data":"93778f10811aa25c4e3f67d8f6ef551ebfb7c3244ab82ae755f5f8f2d7e3bac1"} Mar 18 15:22:06 crc kubenswrapper[4857]: I0318 15:22:06.384982 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" 
event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerStarted","Data":"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"} Mar 18 15:22:06 crc kubenswrapper[4857]: I0318 15:22:06.430853 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tvn9f" podStartSLOduration=2.650659811 podStartE2EDuration="6.430822111s" podCreationTimestamp="2026-03-18 15:22:00 +0000 UTC" firstStartedPulling="2026-03-18 15:22:02.263209547 +0000 UTC m=+4906.392338004" lastFinishedPulling="2026-03-18 15:22:06.043371837 +0000 UTC m=+4910.172500304" observedRunningTime="2026-03-18 15:22:06.406672342 +0000 UTC m=+4910.535800829" watchObservedRunningTime="2026-03-18 15:22:06.430822111 +0000 UTC m=+4910.559950578" Mar 18 15:22:06 crc kubenswrapper[4857]: I0318 15:22:06.879010 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.190562 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vbc\" (UniqueName: \"kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc\") pod \"028c0ae2-15f2-486d-b917-9c93255cc572\" (UID: \"028c0ae2-15f2-486d-b917-9c93255cc572\") " Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.225766 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc" (OuterVolumeSpecName: "kube-api-access-h8vbc") pod "028c0ae2-15f2-486d-b917-9c93255cc572" (UID: "028c0ae2-15f2-486d-b917-9c93255cc572"). InnerVolumeSpecName "kube-api-access-h8vbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.293854 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vbc\" (UniqueName: \"kubernetes.io/projected/028c0ae2-15f2-486d-b917-9c93255cc572-kube-api-access-h8vbc\") on node \"crc\" DevicePath \"\"" Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.419041 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564122-85p5g" Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.419916 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564122-85p5g" event={"ID":"028c0ae2-15f2-486d-b917-9c93255cc572","Type":"ContainerDied","Data":"69b3de2c736e255b331b96c15d53c6f3512d3d8ed019c3de265fe40e6c8f4ee1"} Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.420103 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b3de2c736e255b331b96c15d53c6f3512d3d8ed019c3de265fe40e6c8f4ee1" Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.493238 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564116-px58j"] Mar 18 15:22:07 crc kubenswrapper[4857]: I0318 15:22:07.519078 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564116-px58j"] Mar 18 15:22:09 crc kubenswrapper[4857]: I0318 15:22:09.182670 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaff3533-e6de-4d5b-83ff-51559c41d738" path="/var/lib/kubelet/pods/aaff3533-e6de-4d5b-83ff-51559c41d738/volumes" Mar 18 15:22:10 crc kubenswrapper[4857]: I0318 15:22:10.894666 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tvn9f" Mar 18 15:22:10 crc kubenswrapper[4857]: I0318 15:22:10.895101 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-tvn9f"
Mar 18 15:22:10 crc kubenswrapper[4857]: I0318 15:22:10.978817 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tvn9f"
Mar 18 15:22:11 crc kubenswrapper[4857]: I0318 15:22:11.547037 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tvn9f"
Mar 18 15:22:11 crc kubenswrapper[4857]: I0318 15:22:11.617968 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"]
Mar 18 15:22:13 crc kubenswrapper[4857]: I0318 15:22:13.494157 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tvn9f" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="registry-server" containerID="cri-o://be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da" gracePeriod=2
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.124947 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvn9f"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.270340 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities\") pod \"f2b811d4-5362-4838-90fb-593c3eb36ef2\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") "
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.270930 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content\") pod \"f2b811d4-5362-4838-90fb-593c3eb36ef2\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") "
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.271038 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4btkm\" (UniqueName: \"kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm\") pod \"f2b811d4-5362-4838-90fb-593c3eb36ef2\" (UID: \"f2b811d4-5362-4838-90fb-593c3eb36ef2\") "
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.273687 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities" (OuterVolumeSpecName: "utilities") pod "f2b811d4-5362-4838-90fb-593c3eb36ef2" (UID: "f2b811d4-5362-4838-90fb-593c3eb36ef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.279354 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm" (OuterVolumeSpecName: "kube-api-access-4btkm") pod "f2b811d4-5362-4838-90fb-593c3eb36ef2" (UID: "f2b811d4-5362-4838-90fb-593c3eb36ef2"). InnerVolumeSpecName "kube-api-access-4btkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.309718 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b811d4-5362-4838-90fb-593c3eb36ef2" (UID: "f2b811d4-5362-4838-90fb-593c3eb36ef2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.375016 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.375058 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4btkm\" (UniqueName: \"kubernetes.io/projected/f2b811d4-5362-4838-90fb-593c3eb36ef2-kube-api-access-4btkm\") on node \"crc\" DevicePath \"\""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.375071 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b811d4-5362-4838-90fb-593c3eb36ef2-utilities\") on node \"crc\" DevicePath \"\""
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.710191 4857 generic.go:334] "Generic (PLEG): container finished" podID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerID="be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da" exitCode=0
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.710250 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerDied","Data":"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"}
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.710285 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvn9f" event={"ID":"f2b811d4-5362-4838-90fb-593c3eb36ef2","Type":"ContainerDied","Data":"9adb01904a5cf52d25775269bfba2032acbb8368c2dbb4613a9364b84c3b61da"}
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.710305 4857 scope.go:117] "RemoveContainer" containerID="be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.710334 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvn9f"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.774631 4857 scope.go:117] "RemoveContainer" containerID="25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.779953 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"]
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.803364 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvn9f"]
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.809659 4857 scope.go:117] "RemoveContainer" containerID="401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.875703 4857 scope.go:117] "RemoveContainer" containerID="be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"
Mar 18 15:22:14 crc kubenswrapper[4857]: E0318 15:22:14.876237 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da\": container with ID starting with be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da not found: ID does not exist" containerID="be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.876314 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da"} err="failed to get container status \"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da\": rpc error: code = NotFound desc = could not find container \"be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da\": container with ID starting with be2726a352f90e236823cbae186df04c92d772c0a16dd6fd13ec469aed3ce3da not found: ID does not exist"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.876353 4857 scope.go:117] "RemoveContainer" containerID="25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"
Mar 18 15:22:14 crc kubenswrapper[4857]: E0318 15:22:14.877110 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9\": container with ID starting with 25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9 not found: ID does not exist" containerID="25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.877148 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9"} err="failed to get container status \"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9\": rpc error: code = NotFound desc = could not find container \"25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9\": container with ID starting with 25e6483899f047f0b3ee1b818ec42ed06fa1cb37cab6647cd5ce7a8bc67b8ad9 not found: ID does not exist"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.877175 4857 scope.go:117] "RemoveContainer" containerID="401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb"
Mar 18 15:22:14 crc kubenswrapper[4857]: E0318 15:22:14.877536 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb\": container with ID starting with 401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb not found: ID does not exist" containerID="401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb"
Mar 18 15:22:14 crc kubenswrapper[4857]: I0318 15:22:14.877590 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb"} err="failed to get container status \"401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb\": rpc error: code = NotFound desc = could not find container \"401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb\": container with ID starting with 401cb7bec782996ffafc7d9da05307fa65d587ac28508f3dcee82946a7181ffb not found: ID does not exist"
Mar 18 15:22:15 crc kubenswrapper[4857]: I0318 15:22:15.178101 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" path="/var/lib/kubelet/pods/f2b811d4-5362-4838-90fb-593c3eb36ef2/volumes"
Mar 18 15:22:19 crc kubenswrapper[4857]: I0318 15:22:19.380384 4857 scope.go:117] "RemoveContainer" containerID="ec2f4fd3c9c56adc458498f2a8601c3a4c1b9d8b5005aade8a30721f2fa07058"
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.039035 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.039689 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.039831 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.041594 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.041711 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8" gracePeriod=600
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.924747 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8" exitCode=0
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.924940 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8"}
Mar 18 15:22:27 crc kubenswrapper[4857]: I0318 15:22:27.925161 4857 scope.go:117] "RemoveContainer" containerID="77ce329ba6a6ef3989bc9a743394d80705f1435b6b99eb5b73fbae7250eb7675"
Mar 18 15:22:28 crc kubenswrapper[4857]: I0318 15:22:28.945692 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"}
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.174174 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564124-dvwdx"]
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.175972 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="extract-utilities"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176014 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="extract-utilities"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176057 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176071 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176104 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="extract-utilities"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176120 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="extract-utilities"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176153 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="extract-content"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176167 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="extract-content"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176212 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176226 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176276 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028c0ae2-15f2-486d-b917-9c93255cc572" containerName="oc"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176289 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="028c0ae2-15f2-486d-b917-9c93255cc572" containerName="oc"
Mar 18 15:24:00 crc kubenswrapper[4857]: E0318 15:24:00.176323 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="extract-content"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176337 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="extract-content"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176862 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="028c0ae2-15f2-486d-b917-9c93255cc572" containerName="oc"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176901 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b811d4-5362-4838-90fb-593c3eb36ef2" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.176938 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1229cd-03a3-4f80-ad25-df4a3481f58d" containerName="registry-server"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.178789 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.183553 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.183605 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.183817 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.192580 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564124-dvwdx"]
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.212556 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shmj9\" (UniqueName: \"kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9\") pod \"auto-csr-approver-29564124-dvwdx\" (UID: \"ebe07019-bd4e-4887-970c-a02fdc932f25\") " pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.321923 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shmj9\" (UniqueName: \"kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9\") pod \"auto-csr-approver-29564124-dvwdx\" (UID: \"ebe07019-bd4e-4887-970c-a02fdc932f25\") " pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.355409 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shmj9\" (UniqueName: \"kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9\") pod \"auto-csr-approver-29564124-dvwdx\" (UID: \"ebe07019-bd4e-4887-970c-a02fdc932f25\") " pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:00 crc kubenswrapper[4857]: I0318 15:24:00.786920 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:01 crc kubenswrapper[4857]: I0318 15:24:01.337708 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564124-dvwdx"]
Mar 18 15:24:01 crc kubenswrapper[4857]: W0318 15:24:01.349319 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebe07019_bd4e_4887_970c_a02fdc932f25.slice/crio-fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9 WatchSource:0}: Error finding container fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9: Status 404 returned error can't find the container with id fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9
Mar 18 15:24:01 crc kubenswrapper[4857]: I0318 15:24:01.819008 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564124-dvwdx" event={"ID":"ebe07019-bd4e-4887-970c-a02fdc932f25","Type":"ContainerStarted","Data":"fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9"}
Mar 18 15:24:04 crc kubenswrapper[4857]: I0318 15:24:04.891843 4857 generic.go:334] "Generic (PLEG): container finished" podID="ebe07019-bd4e-4887-970c-a02fdc932f25" containerID="093a0923789414a87bb74a2fe4c822274b7ff8491ef259d8e02d451a308a65c5" exitCode=0
Mar 18 15:24:04 crc kubenswrapper[4857]: I0318 15:24:04.891956 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564124-dvwdx" event={"ID":"ebe07019-bd4e-4887-970c-a02fdc932f25","Type":"ContainerDied","Data":"093a0923789414a87bb74a2fe4c822274b7ff8491ef259d8e02d451a308a65c5"}
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.481488 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.546496 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shmj9\" (UniqueName: \"kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9\") pod \"ebe07019-bd4e-4887-970c-a02fdc932f25\" (UID: \"ebe07019-bd4e-4887-970c-a02fdc932f25\") "
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.553780 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9" (OuterVolumeSpecName: "kube-api-access-shmj9") pod "ebe07019-bd4e-4887-970c-a02fdc932f25" (UID: "ebe07019-bd4e-4887-970c-a02fdc932f25"). InnerVolumeSpecName "kube-api-access-shmj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.651091 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shmj9\" (UniqueName: \"kubernetes.io/projected/ebe07019-bd4e-4887-970c-a02fdc932f25-kube-api-access-shmj9\") on node \"crc\" DevicePath \"\""
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.921312 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564124-dvwdx" event={"ID":"ebe07019-bd4e-4887-970c-a02fdc932f25","Type":"ContainerDied","Data":"fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9"}
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.921676 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbac74408fff37a40d0ea138d6d9ab0742908454d519e185c74f4e1ac53c09b9"
Mar 18 15:24:06 crc kubenswrapper[4857]: I0318 15:24:06.921389 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564124-dvwdx"
Mar 18 15:24:07 crc kubenswrapper[4857]: I0318 15:24:07.608945 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564118-w5rzn"]
Mar 18 15:24:07 crc kubenswrapper[4857]: I0318 15:24:07.624123 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564118-w5rzn"]
Mar 18 15:24:09 crc kubenswrapper[4857]: I0318 15:24:09.183658 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175ec8bb-145b-4b77-b6f6-2d52e10b5f31" path="/var/lib/kubelet/pods/175ec8bb-145b-4b77-b6f6-2d52e10b5f31/volumes"
Mar 18 15:24:19 crc kubenswrapper[4857]: I0318 15:24:19.639974 4857 scope.go:117] "RemoveContainer" containerID="d87469374ce9b44ebf148dd6915ea18f572697acf385bb84a814e235a22a7b2c"
Mar 18 15:24:57 crc kubenswrapper[4857]: I0318 15:24:57.039349 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:24:57 crc kubenswrapper[4857]: I0318 15:24:57.039884 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:25:21 crc kubenswrapper[4857]: I0318 15:25:21.502355 4857 trace.go:236] Trace[772935405]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (18-Mar-2026 15:25:02.521) (total time: 18980ms):
Mar 18 15:25:21 crc kubenswrapper[4857]: Trace[772935405]: [18.980787473s] [18.980787473s] END
Mar 18 15:25:21 crc kubenswrapper[4857]: I0318 15:25:21.508217 4857 trace.go:236] Trace[78291050]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (18-Mar-2026 15:25:01.054) (total time: 20453ms):
Mar 18 15:25:21 crc kubenswrapper[4857]: Trace[78291050]: [20.453865992s] [20.453865992s] END
Mar 18 15:25:21 crc kubenswrapper[4857]: I0318 15:25:21.553897 4857 trace.go:236] Trace[690307260]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (18-Mar-2026 15:25:06.845) (total time: 14707ms):
Mar 18 15:25:21 crc kubenswrapper[4857]: Trace[690307260]: [14.707958976s] [14.707958976s] END
Mar 18 15:25:27 crc kubenswrapper[4857]: I0318 15:25:27.038581 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:25:27 crc kubenswrapper[4857]: I0318 15:25:27.039081 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:25:57 crc kubenswrapper[4857]: I0318 15:25:57.045574 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:25:57 crc kubenswrapper[4857]: I0318 15:25:57.046182 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:25:57 crc kubenswrapper[4857]: I0318 15:25:57.046280 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 15:25:57 crc kubenswrapper[4857]: I0318 15:25:57.047344 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 18 15:25:57 crc kubenswrapper[4857]: I0318 15:25:57.047421 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" gracePeriod=600
Mar 18 15:25:57 crc kubenswrapper[4857]: E0318 15:25:57.506095 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:25:58 crc kubenswrapper[4857]: I0318 15:25:58.143181 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" exitCode=0
Mar 18 15:25:58 crc kubenswrapper[4857]: I0318 15:25:58.143255 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"}
Mar 18 15:25:58 crc kubenswrapper[4857]: I0318 15:25:58.143414 4857 scope.go:117] "RemoveContainer" containerID="b19b7b99ca88c11860bda3893fa12a7c55435e55e78d0690f263c40dac127ca8"
Mar 18 15:25:58 crc kubenswrapper[4857]: I0318 15:25:58.144830 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"
Mar 18 15:25:58 crc kubenswrapper[4857]: E0318 15:25:58.145468 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.332883 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564126-hh6vc"]
Mar 18 15:26:00 crc kubenswrapper[4857]: E0318 15:26:00.335056 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe07019-bd4e-4887-970c-a02fdc932f25" containerName="oc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.335252 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe07019-bd4e-4887-970c-a02fdc932f25" containerName="oc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.335910 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe07019-bd4e-4887-970c-a02fdc932f25" containerName="oc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.337976 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.341444 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.341443 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.343197 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.351504 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564126-hh6vc"]
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.478525 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgpk\" (UniqueName: \"kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk\") pod \"auto-csr-approver-29564126-hh6vc\" (UID: \"e8d06d68-3eb9-4d85-84fd-cd190b48cb48\") " pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.582641 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsgpk\" (UniqueName: \"kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk\") pod \"auto-csr-approver-29564126-hh6vc\" (UID: \"e8d06d68-3eb9-4d85-84fd-cd190b48cb48\") " pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.607638 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsgpk\" (UniqueName: \"kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk\") pod \"auto-csr-approver-29564126-hh6vc\" (UID: \"e8d06d68-3eb9-4d85-84fd-cd190b48cb48\") " pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:00 crc kubenswrapper[4857]: I0318 15:26:00.670590 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:01 crc kubenswrapper[4857]: I0318 15:26:01.385160 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564126-hh6vc"]
Mar 18 15:26:01 crc kubenswrapper[4857]: I0318 15:26:01.387673 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 15:26:02 crc kubenswrapper[4857]: I0318 15:26:02.307178 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564126-hh6vc" event={"ID":"e8d06d68-3eb9-4d85-84fd-cd190b48cb48","Type":"ContainerStarted","Data":"efe783c1871feced9803da639ff0090137bb74f7502cd52288bfb29f373c8e32"}
Mar 18 15:26:03 crc kubenswrapper[4857]: I0318 15:26:03.325927 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564126-hh6vc" event={"ID":"e8d06d68-3eb9-4d85-84fd-cd190b48cb48","Type":"ContainerStarted","Data":"00314f1bf09b972d525fa6c771c02605fcd82cd197d20f88cc628e109575f9f0"}
Mar 18 15:26:03 crc kubenswrapper[4857]: I0318 15:26:03.364846 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564126-hh6vc" podStartSLOduration=2.171196445 podStartE2EDuration="3.364792971s" podCreationTimestamp="2026-03-18 15:26:00 +0000 UTC" firstStartedPulling="2026-03-18 15:26:01.387314557 +0000 UTC m=+5145.516443014" lastFinishedPulling="2026-03-18 15:26:02.580911073 +0000 UTC m=+5146.710039540" observedRunningTime="2026-03-18 15:26:03.348141741 +0000 UTC m=+5147.477270208" watchObservedRunningTime="2026-03-18 15:26:03.364792971 +0000 UTC m=+5147.493921438"
Mar 18 15:26:04 crc kubenswrapper[4857]: I0318 15:26:04.348665 4857 generic.go:334] "Generic (PLEG): container finished" podID="e8d06d68-3eb9-4d85-84fd-cd190b48cb48" containerID="00314f1bf09b972d525fa6c771c02605fcd82cd197d20f88cc628e109575f9f0" exitCode=0
Mar 18 15:26:04 crc kubenswrapper[4857]: I0318 15:26:04.348834 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564126-hh6vc" event={"ID":"e8d06d68-3eb9-4d85-84fd-cd190b48cb48","Type":"ContainerDied","Data":"00314f1bf09b972d525fa6c771c02605fcd82cd197d20f88cc628e109575f9f0"}
Mar 18 15:26:10 crc kubenswrapper[4857]: I0318 15:26:10.164519 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"
Mar 18 15:26:10 crc kubenswrapper[4857]: E0318 15:26:10.165352 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:26:14 crc kubenswrapper[4857]: I0318 15:26:14.364296 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.425499 4857 trace.go:236] Trace[1617162639]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Mar-2026 15:26:13.763) (total time: 2661ms):
Mar 18 15:26:16 crc kubenswrapper[4857]: Trace[1617162639]: [2.661688981s] [2.661688981s] END
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.425499 4857 trace.go:236] Trace[294500766]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (18-Mar-2026 15:26:12.791) (total time: 3634ms):
Mar 18 15:26:16 crc kubenswrapper[4857]: Trace[294500766]: [3.634400235s] [3.634400235s] END
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.717927 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.747340 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgpk\" (UniqueName: \"kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk\") pod \"e8d06d68-3eb9-4d85-84fd-cd190b48cb48\" (UID: \"e8d06d68-3eb9-4d85-84fd-cd190b48cb48\") "
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.761601 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk" (OuterVolumeSpecName: "kube-api-access-nsgpk") pod "e8d06d68-3eb9-4d85-84fd-cd190b48cb48" (UID: "e8d06d68-3eb9-4d85-84fd-cd190b48cb48"). InnerVolumeSpecName "kube-api-access-nsgpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.768313 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564126-hh6vc" event={"ID":"e8d06d68-3eb9-4d85-84fd-cd190b48cb48","Type":"ContainerDied","Data":"efe783c1871feced9803da639ff0090137bb74f7502cd52288bfb29f373c8e32"}
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.768390 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efe783c1871feced9803da639ff0090137bb74f7502cd52288bfb29f373c8e32"
Mar 18 15:26:16 crc kubenswrapper[4857]: I0318 15:26:16.768496 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564126-hh6vc"
Mar 18 15:26:17 crc kubenswrapper[4857]: I0318 15:26:17.040912 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsgpk\" (UniqueName: \"kubernetes.io/projected/e8d06d68-3eb9-4d85-84fd-cd190b48cb48-kube-api-access-nsgpk\") on node \"crc\" DevicePath \"\""
Mar 18 15:26:17 crc kubenswrapper[4857]: I0318 15:26:17.840089 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564120-tvzt6"]
Mar 18 15:26:17 crc kubenswrapper[4857]: I0318 15:26:17.854397 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564120-tvzt6"]
Mar 18 15:26:19 crc kubenswrapper[4857]: I0318 15:26:19.181500 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163be1db-a9a5-4188-8d3e-65468f40167e" path="/var/lib/kubelet/pods/163be1db-a9a5-4188-8d3e-65468f40167e/volumes"
Mar 18 15:26:19 crc kubenswrapper[4857]: I0318 15:26:19.781460 4857 scope.go:117] "RemoveContainer" containerID="300f5db6e93930ca02507d5295fd5064d04bdf1d42186e032ec6077e70bcfe70"
Mar 18 15:26:21 crc kubenswrapper[4857]: I0318 15:26:21.167963 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"
Mar 18 15:26:21 crc kubenswrapper[4857]: E0318 15:26:21.170420 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:26:34 crc kubenswrapper[4857]: I0318 15:26:34.164937 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"
Mar 18 
15:26:34 crc kubenswrapper[4857]: E0318 15:26:34.165931 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:26:43 crc kubenswrapper[4857]: I0318 15:26:43.513331 4857 trace.go:236] Trace[1822296851]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (18-Mar-2026 15:26:42.408) (total time: 1104ms): Mar 18 15:26:43 crc kubenswrapper[4857]: Trace[1822296851]: [1.104844128s] [1.104844128s] END Mar 18 15:26:49 crc kubenswrapper[4857]: I0318 15:26:49.165150 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:26:49 crc kubenswrapper[4857]: E0318 15:26:49.166177 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.164067 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:27:04 crc kubenswrapper[4857]: E0318 15:27:04.167049 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.633611 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:04 crc kubenswrapper[4857]: E0318 15:27:04.634423 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d06d68-3eb9-4d85-84fd-cd190b48cb48" containerName="oc" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.634445 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d06d68-3eb9-4d85-84fd-cd190b48cb48" containerName="oc" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.634838 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d06d68-3eb9-4d85-84fd-cd190b48cb48" containerName="oc" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.637641 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.646268 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.752358 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.752439 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhbw\" (UniqueName: \"kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.752506 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.856272 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.856337 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ddhbw\" (UniqueName: \"kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.856403 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.857457 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.857618 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.892240 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhbw\" (UniqueName: \"kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw\") pod \"community-operators-25d5m\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:04 crc kubenswrapper[4857]: I0318 15:27:04.958728 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.419055 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.421537 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.428898 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.429393 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xwmbd" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.429538 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.431894 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.463261 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.480572 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.480848 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.480929 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481162 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481258 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481449 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481498 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481619 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.481706 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxzt\" (UniqueName: \"kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.577301 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584251 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584351 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584382 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584515 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584566 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584631 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584664 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584719 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.584775 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxzt\" (UniqueName: \"kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.586677 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.587352 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.587869 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.589243 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.594504 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.595806 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.595809 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.606786 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.617271 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxzt\" (UniqueName: \"kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt\") pod \"tempest-tests-tempest\" (UID: 
\"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.651780 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " pod="openstack/tempest-tests-tempest" Mar 18 15:27:05 crc kubenswrapper[4857]: I0318 15:27:05.755944 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 18 15:27:06 crc kubenswrapper[4857]: I0318 15:27:06.321366 4857 generic.go:334] "Generic (PLEG): container finished" podID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerID="5358de24663ad8696993fd067ff914e46d47703f86f5de4c0f4cfb4fce27e5dc" exitCode=0 Mar 18 15:27:06 crc kubenswrapper[4857]: I0318 15:27:06.321963 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerDied","Data":"5358de24663ad8696993fd067ff914e46d47703f86f5de4c0f4cfb4fce27e5dc"} Mar 18 15:27:06 crc kubenswrapper[4857]: I0318 15:27:06.322089 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerStarted","Data":"ba0ffe879d62b080cffb58a9367b2e3be26964ba2173d7bb674b0ee370f6d724"} Mar 18 15:27:06 crc kubenswrapper[4857]: I0318 15:27:06.336381 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 18 15:27:07 crc kubenswrapper[4857]: I0318 15:27:07.336667 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18946755-ed18-4d4a-bd99-7bb08f42c91b","Type":"ContainerStarted","Data":"66fefa4133aee0cc7836f6878165dc4aab4883c4bbd5969c4f16279dcde07922"} Mar 18 15:27:07 crc 
kubenswrapper[4857]: I0318 15:27:07.340795 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerStarted","Data":"d58c1e71dd74f753284a3516208526b998e48d68e5989200bea00f70206a79cf"} Mar 18 15:27:10 crc kubenswrapper[4857]: I0318 15:27:10.411490 4857 generic.go:334] "Generic (PLEG): container finished" podID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerID="d58c1e71dd74f753284a3516208526b998e48d68e5989200bea00f70206a79cf" exitCode=0 Mar 18 15:27:10 crc kubenswrapper[4857]: I0318 15:27:10.411571 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerDied","Data":"d58c1e71dd74f753284a3516208526b998e48d68e5989200bea00f70206a79cf"} Mar 18 15:27:15 crc kubenswrapper[4857]: I0318 15:27:15.302583 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:27:15 crc kubenswrapper[4857]: E0318 15:27:15.305016 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:17 crc kubenswrapper[4857]: I0318 15:27:17.536950 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerStarted","Data":"9996a645b59ffdb7cb7b5b9ff27e3f3877cf8d71f957acfbc40194f5669566ff"} Mar 18 15:27:17 crc kubenswrapper[4857]: I0318 15:27:17.566860 4857 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-25d5m" podStartSLOduration=2.959529291 podStartE2EDuration="13.566837369s" podCreationTimestamp="2026-03-18 15:27:04 +0000 UTC" firstStartedPulling="2026-03-18 15:27:06.325298175 +0000 UTC m=+5210.454426652" lastFinishedPulling="2026-03-18 15:27:16.932606273 +0000 UTC m=+5221.061734730" observedRunningTime="2026-03-18 15:27:17.562582722 +0000 UTC m=+5221.691711199" watchObservedRunningTime="2026-03-18 15:27:17.566837369 +0000 UTC m=+5221.695965826" Mar 18 15:27:19 crc kubenswrapper[4857]: I0318 15:27:19.899294 4857 scope.go:117] "RemoveContainer" containerID="a68c7e6cf4bd154695fc69bb73a1cd03b0017bebb78e5960ae8281e4300ca02d" Mar 18 15:27:24 crc kubenswrapper[4857]: I0318 15:27:24.959650 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:24 crc kubenswrapper[4857]: I0318 15:27:24.960369 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:26 crc kubenswrapper[4857]: I0318 15:27:26.019571 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-25d5m" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="registry-server" probeResult="failure" output=< Mar 18 15:27:26 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:27:26 crc kubenswrapper[4857]: > Mar 18 15:27:27 crc kubenswrapper[4857]: I0318 15:27:27.176767 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:27:27 crc kubenswrapper[4857]: E0318 15:27:27.177207 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:35 crc kubenswrapper[4857]: I0318 15:27:35.036854 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:35 crc kubenswrapper[4857]: I0318 15:27:35.107679 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:36 crc kubenswrapper[4857]: I0318 15:27:36.009549 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:36 crc kubenswrapper[4857]: I0318 15:27:36.813505 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25d5m" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="registry-server" containerID="cri-o://9996a645b59ffdb7cb7b5b9ff27e3f3877cf8d71f957acfbc40194f5669566ff" gracePeriod=2 Mar 18 15:27:37 crc kubenswrapper[4857]: I0318 15:27:37.838506 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerDied","Data":"9996a645b59ffdb7cb7b5b9ff27e3f3877cf8d71f957acfbc40194f5669566ff"} Mar 18 15:27:37 crc kubenswrapper[4857]: I0318 15:27:37.838461 4857 generic.go:334] "Generic (PLEG): container finished" podID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerID="9996a645b59ffdb7cb7b5b9ff27e3f3877cf8d71f957acfbc40194f5669566ff" exitCode=0 Mar 18 15:27:40 crc kubenswrapper[4857]: I0318 15:27:40.164806 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:27:40 crc kubenswrapper[4857]: E0318 15:27:40.165838 4857 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:41 crc kubenswrapper[4857]: E0318 15:27:41.772674 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 18 15:27:41 crc kubenswrapper[4857]: E0318 15:27:41.779361 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.ya
ml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbxzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(18946755-ed18-4d4a-bd99-7bb08f42c91b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 15:27:41 
crc kubenswrapper[4857]: E0318 15:27:41.780645 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="18946755-ed18-4d4a-bd99-7bb08f42c91b" Mar 18 15:27:41 crc kubenswrapper[4857]: E0318 15:27:41.922022 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="18946755-ed18-4d4a-bd99-7bb08f42c91b" Mar 18 15:27:42 crc kubenswrapper[4857]: I0318 15:27:42.331864 4857 scope.go:117] "RemoveContainer" containerID="c3093f2e5f7995ab340944f32036258bf280096df9f826fd8cbcb3278bcc9295" Mar 18 15:27:42 crc kubenswrapper[4857]: I0318 15:27:42.580631 4857 scope.go:117] "RemoveContainer" containerID="b0a3c50fddfb8a04b1c57477401432c9b5a66a4e1fe01af164729bac51c54884" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.285073 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.348047 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content\") pod \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.349024 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddhbw\" (UniqueName: \"kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw\") pod \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.349697 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities\") pod \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\" (UID: \"883ae55f-6b25-49bf-b1f8-0bf8a1175539\") " Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.351930 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities" (OuterVolumeSpecName: "utilities") pod "883ae55f-6b25-49bf-b1f8-0bf8a1175539" (UID: "883ae55f-6b25-49bf-b1f8-0bf8a1175539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.370969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw" (OuterVolumeSpecName: "kube-api-access-ddhbw") pod "883ae55f-6b25-49bf-b1f8-0bf8a1175539" (UID: "883ae55f-6b25-49bf-b1f8-0bf8a1175539"). InnerVolumeSpecName "kube-api-access-ddhbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.455534 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.455598 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddhbw\" (UniqueName: \"kubernetes.io/projected/883ae55f-6b25-49bf-b1f8-0bf8a1175539-kube-api-access-ddhbw\") on node \"crc\" DevicePath \"\"" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.478780 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "883ae55f-6b25-49bf-b1f8-0bf8a1175539" (UID: "883ae55f-6b25-49bf-b1f8-0bf8a1175539"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.562082 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883ae55f-6b25-49bf-b1f8-0bf8a1175539-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.941941 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25d5m" event={"ID":"883ae55f-6b25-49bf-b1f8-0bf8a1175539","Type":"ContainerDied","Data":"ba0ffe879d62b080cffb58a9367b2e3be26964ba2173d7bb674b0ee370f6d724"} Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.942359 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25d5m" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.942647 4857 scope.go:117] "RemoveContainer" containerID="9996a645b59ffdb7cb7b5b9ff27e3f3877cf8d71f957acfbc40194f5669566ff" Mar 18 15:27:43 crc kubenswrapper[4857]: I0318 15:27:43.971541 4857 scope.go:117] "RemoveContainer" containerID="d58c1e71dd74f753284a3516208526b998e48d68e5989200bea00f70206a79cf" Mar 18 15:27:44 crc kubenswrapper[4857]: I0318 15:27:44.012081 4857 scope.go:117] "RemoveContainer" containerID="5358de24663ad8696993fd067ff914e46d47703f86f5de4c0f4cfb4fce27e5dc" Mar 18 15:27:44 crc kubenswrapper[4857]: I0318 15:27:44.015184 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:44 crc kubenswrapper[4857]: I0318 15:27:44.028464 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25d5m"] Mar 18 15:27:45 crc kubenswrapper[4857]: I0318 15:27:45.183973 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" path="/var/lib/kubelet/pods/883ae55f-6b25-49bf-b1f8-0bf8a1175539/volumes" Mar 18 15:27:51 crc kubenswrapper[4857]: I0318 15:27:51.164852 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:27:51 crc kubenswrapper[4857]: E0318 15:27:51.165786 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:27:56 crc kubenswrapper[4857]: I0318 15:27:56.654950 4857 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.189119 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18946755-ed18-4d4a-bd99-7bb08f42c91b","Type":"ContainerStarted","Data":"8f4bb54650f0b81b23061d4d2f9b15448fd75788662c0ca363aaef53c2d76b4e"} Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.200655 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564128-rfbbs"] Mar 18 15:28:00 crc kubenswrapper[4857]: E0318 15:28:00.201606 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="registry-server" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.201634 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="registry-server" Mar 18 15:28:00 crc kubenswrapper[4857]: E0318 15:28:00.201656 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="extract-utilities" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.201667 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="extract-utilities" Mar 18 15:28:00 crc kubenswrapper[4857]: E0318 15:28:00.201687 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="extract-content" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.201696 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="extract-content" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.202073 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="883ae55f-6b25-49bf-b1f8-0bf8a1175539" containerName="registry-server" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 
15:28:00.203375 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.205550 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.205833 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.207122 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.215926 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564128-rfbbs"] Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.222895 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.899562825 podStartE2EDuration="56.222867327s" podCreationTimestamp="2026-03-18 15:27:04 +0000 UTC" firstStartedPulling="2026-03-18 15:27:06.320306629 +0000 UTC m=+5210.449435096" lastFinishedPulling="2026-03-18 15:27:56.643611131 +0000 UTC m=+5260.772739598" observedRunningTime="2026-03-18 15:28:00.218335903 +0000 UTC m=+5264.347464360" watchObservedRunningTime="2026-03-18 15:28:00.222867327 +0000 UTC m=+5264.351995784" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.277617 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxd79\" (UniqueName: \"kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79\") pod \"auto-csr-approver-29564128-rfbbs\" (UID: \"8e5fbed3-75da-4a41-b46e-12e195588151\") " pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.381457 4857 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxd79\" (UniqueName: \"kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79\") pod \"auto-csr-approver-29564128-rfbbs\" (UID: \"8e5fbed3-75da-4a41-b46e-12e195588151\") " pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.413137 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxd79\" (UniqueName: \"kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79\") pod \"auto-csr-approver-29564128-rfbbs\" (UID: \"8e5fbed3-75da-4a41-b46e-12e195588151\") " pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:00 crc kubenswrapper[4857]: I0318 15:28:00.529795 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:01 crc kubenswrapper[4857]: I0318 15:28:01.091556 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564128-rfbbs"] Mar 18 15:28:01 crc kubenswrapper[4857]: I0318 15:28:01.201240 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" event={"ID":"8e5fbed3-75da-4a41-b46e-12e195588151","Type":"ContainerStarted","Data":"1b99baeefae9dd42f3a7207f7e54cba1b0513954dae59b7f8dab2a2f0083041f"} Mar 18 15:28:03 crc kubenswrapper[4857]: I0318 15:28:03.165895 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:28:03 crc kubenswrapper[4857]: E0318 15:28:03.166470 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:28:03 crc kubenswrapper[4857]: I0318 15:28:03.254918 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" podStartSLOduration=2.131626747 podStartE2EDuration="3.254899109s" podCreationTimestamp="2026-03-18 15:28:00 +0000 UTC" firstStartedPulling="2026-03-18 15:28:01.08350591 +0000 UTC m=+5265.212634367" lastFinishedPulling="2026-03-18 15:28:02.206778282 +0000 UTC m=+5266.335906729" observedRunningTime="2026-03-18 15:28:03.254051998 +0000 UTC m=+5267.383180455" watchObservedRunningTime="2026-03-18 15:28:03.254899109 +0000 UTC m=+5267.384027566" Mar 18 15:28:04 crc kubenswrapper[4857]: I0318 15:28:04.262141 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" event={"ID":"8e5fbed3-75da-4a41-b46e-12e195588151","Type":"ContainerStarted","Data":"8b2505cad8b3bd8f45a8c1c64c413c1b7a6659cc1dbd6c3e92f5fee9220fd56d"} Mar 18 15:28:05 crc kubenswrapper[4857]: I0318 15:28:05.303418 4857 generic.go:334] "Generic (PLEG): container finished" podID="8e5fbed3-75da-4a41-b46e-12e195588151" containerID="8b2505cad8b3bd8f45a8c1c64c413c1b7a6659cc1dbd6c3e92f5fee9220fd56d" exitCode=0 Mar 18 15:28:05 crc kubenswrapper[4857]: I0318 15:28:05.303696 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" event={"ID":"8e5fbed3-75da-4a41-b46e-12e195588151","Type":"ContainerDied","Data":"8b2505cad8b3bd8f45a8c1c64c413c1b7a6659cc1dbd6c3e92f5fee9220fd56d"} Mar 18 15:28:06 crc kubenswrapper[4857]: I0318 15:28:06.822058 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:06 crc kubenswrapper[4857]: I0318 15:28:06.892697 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxd79\" (UniqueName: \"kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79\") pod \"8e5fbed3-75da-4a41-b46e-12e195588151\" (UID: \"8e5fbed3-75da-4a41-b46e-12e195588151\") " Mar 18 15:28:06 crc kubenswrapper[4857]: I0318 15:28:06.900649 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79" (OuterVolumeSpecName: "kube-api-access-vxd79") pod "8e5fbed3-75da-4a41-b46e-12e195588151" (UID: "8e5fbed3-75da-4a41-b46e-12e195588151"). InnerVolumeSpecName "kube-api-access-vxd79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:28:06 crc kubenswrapper[4857]: I0318 15:28:06.997358 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxd79\" (UniqueName: \"kubernetes.io/projected/8e5fbed3-75da-4a41-b46e-12e195588151-kube-api-access-vxd79\") on node \"crc\" DevicePath \"\"" Mar 18 15:28:07 crc kubenswrapper[4857]: I0318 15:28:07.331150 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" event={"ID":"8e5fbed3-75da-4a41-b46e-12e195588151","Type":"ContainerDied","Data":"1b99baeefae9dd42f3a7207f7e54cba1b0513954dae59b7f8dab2a2f0083041f"} Mar 18 15:28:07 crc kubenswrapper[4857]: I0318 15:28:07.331468 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b99baeefae9dd42f3a7207f7e54cba1b0513954dae59b7f8dab2a2f0083041f" Mar 18 15:28:07 crc kubenswrapper[4857]: I0318 15:28:07.331260 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564128-rfbbs" Mar 18 15:28:07 crc kubenswrapper[4857]: I0318 15:28:07.423051 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564122-85p5g"] Mar 18 15:28:07 crc kubenswrapper[4857]: I0318 15:28:07.437825 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564122-85p5g"] Mar 18 15:28:09 crc kubenswrapper[4857]: I0318 15:28:09.179455 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028c0ae2-15f2-486d-b917-9c93255cc572" path="/var/lib/kubelet/pods/028c0ae2-15f2-486d-b917-9c93255cc572/volumes" Mar 18 15:28:16 crc kubenswrapper[4857]: I0318 15:28:16.194462 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:28:16 crc kubenswrapper[4857]: E0318 15:28:16.195265 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:28:30 crc kubenswrapper[4857]: I0318 15:28:30.165255 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:28:30 crc kubenswrapper[4857]: E0318 15:28:30.166435 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" 
podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:28:41 crc kubenswrapper[4857]: I0318 15:28:41.284632 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:28:41 crc kubenswrapper[4857]: E0318 15:28:41.285444 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:28:42 crc kubenswrapper[4857]: I0318 15:28:42.760185 4857 scope.go:117] "RemoveContainer" containerID="93778f10811aa25c4e3f67d8f6ef551ebfb7c3244ab82ae755f5f8f2d7e3bac1" Mar 18 15:28:43 crc kubenswrapper[4857]: I0318 15:28:43.907130 4857 trace.go:236] Trace[1975601840]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (18-Mar-2026 15:28:42.880) (total time: 1026ms): Mar 18 15:28:43 crc kubenswrapper[4857]: Trace[1975601840]: [1.026488973s] [1.026488973s] END Mar 18 15:28:54 crc kubenswrapper[4857]: I0318 15:28:54.163857 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:28:54 crc kubenswrapper[4857]: E0318 15:28:54.164743 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:29:07 crc kubenswrapper[4857]: I0318 15:29:07.178430 4857 scope.go:117] 
"RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:29:07 crc kubenswrapper[4857]: E0318 15:29:07.180349 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:29:22 crc kubenswrapper[4857]: I0318 15:29:22.457409 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:29:22 crc kubenswrapper[4857]: E0318 15:29:22.476847 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:29:34 crc kubenswrapper[4857]: I0318 15:29:34.166934 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:29:34 crc kubenswrapper[4857]: E0318 15:29:34.168647 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:29:46 crc kubenswrapper[4857]: I0318 15:29:46.168066 
4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:29:46 crc kubenswrapper[4857]: E0318 15:29:46.169268 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:29:59 crc kubenswrapper[4857]: I0318 15:29:59.171504 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:29:59 crc kubenswrapper[4857]: E0318 15:29:59.174509 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.940149 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564130-g4dps"] Mar 18 15:30:01 crc kubenswrapper[4857]: E0318 15:30:01.946390 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5fbed3-75da-4a41-b46e-12e195588151" containerName="oc" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.946435 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5fbed3-75da-4a41-b46e-12e195588151" containerName="oc" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.950285 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5fbed3-75da-4a41-b46e-12e195588151" 
containerName="oc" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.953832 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.960223 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4"] Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.967130 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.971915 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.971907 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.971906 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.971904 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:30:01 crc kubenswrapper[4857]: I0318 15:30:01.971961 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.099118 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8r7b\" (UniqueName: \"kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b\") pod \"auto-csr-approver-29564130-g4dps\" (UID: \"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51\") " pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 
15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.099625 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf9h\" (UniqueName: \"kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.099665 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.100678 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.130220 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4"] Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.155245 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564130-g4dps"] Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.207056 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8r7b\" (UniqueName: \"kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b\") pod 
\"auto-csr-approver-29564130-g4dps\" (UID: \"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51\") " pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.207187 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf9h\" (UniqueName: \"kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.207221 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.208188 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.228898 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.278701 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator 
namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.279524 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.278560 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:02 crc kubenswrapper[4857]: I0318 15:30:02.279858 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:05 crc kubenswrapper[4857]: I0318 15:30:05.273996 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:10 crc kubenswrapper[4857]: I0318 15:30:10.170923 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:30:10 crc kubenswrapper[4857]: E0318 15:30:10.177212 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:30:15 crc kubenswrapper[4857]: I0318 15:30:15.244236 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.206771 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvf9h\" (UniqueName: \"kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.207354 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8r7b\" (UniqueName: \"kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b\") pod \"auto-csr-approver-29564130-g4dps\" (UID: \"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51\") " pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.208163 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume\") pod \"collect-profiles-29564130-rphr4\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.259064 4857 trace.go:236] Trace[291075172]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (18-Mar-2026 15:30:07.937) (total time: 9315ms): Mar 18 15:30:17 crc kubenswrapper[4857]: Trace[291075172]: [9.315251521s] [9.315251521s] END Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.268808 4857 trace.go:236] Trace[422738616]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (18-Mar-2026 15:30:05.886) (total time: 11382ms): Mar 18 15:30:17 crc kubenswrapper[4857]: Trace[422738616]: [11.382393425s] [11.382393425s] END Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.386133 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:30:17 crc kubenswrapper[4857]: I0318 15:30:17.398797 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" Mar 18 15:30:19 crc kubenswrapper[4857]: I0318 15:30:19.714805 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564130-g4dps"] Mar 18 15:30:19 crc kubenswrapper[4857]: I0318 15:30:19.726112 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4"] Mar 18 15:30:20 crc kubenswrapper[4857]: W0318 15:30:20.505088 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b2c4b59_9fc5_4ec9_9189_60cb1e716f51.slice/crio-c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423 WatchSource:0}: Error finding container c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423: Status 404 returned error can't find the container with id c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423 Mar 18 15:30:20 crc kubenswrapper[4857]: I0318 15:30:20.574241 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" event={"ID":"6f407cca-3a72-46e6-bb51-fdb911d22ea2","Type":"ContainerStarted","Data":"90659e4539a1b4c8bdd06fe3f4b0020a15e7d3164f47aad72877f138308c918b"} Mar 18 15:30:20 crc kubenswrapper[4857]: I0318 15:30:20.578085 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564130-g4dps" event={"ID":"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51","Type":"ContainerStarted","Data":"c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423"} Mar 18 15:30:21 crc kubenswrapper[4857]: I0318 15:30:21.594806 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" 
event={"ID":"6f407cca-3a72-46e6-bb51-fdb911d22ea2","Type":"ContainerStarted","Data":"e82420db1ebfecc955ea9626bb323208d6692540641db66c7b486b6dc105ceda"} Mar 18 15:30:22 crc kubenswrapper[4857]: I0318 15:30:22.043136 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" podStartSLOduration=21.041065323 podStartE2EDuration="21.041065323s" podCreationTimestamp="2026-03-18 15:30:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 15:30:21.7350257 +0000 UTC m=+5405.864154157" watchObservedRunningTime="2026-03-18 15:30:22.041065323 +0000 UTC m=+5406.170193770" Mar 18 15:30:23 crc kubenswrapper[4857]: I0318 15:30:23.805272 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" event={"ID":"6f407cca-3a72-46e6-bb51-fdb911d22ea2","Type":"ContainerDied","Data":"e82420db1ebfecc955ea9626bb323208d6692540641db66c7b486b6dc105ceda"} Mar 18 15:30:23 crc kubenswrapper[4857]: I0318 15:30:23.805853 4857 generic.go:334] "Generic (PLEG): container finished" podID="6f407cca-3a72-46e6-bb51-fdb911d22ea2" containerID="e82420db1ebfecc955ea9626bb323208d6692540641db66c7b486b6dc105ceda" exitCode=0 Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.152589 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.219172 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" 
containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.259960 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.260347 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.303062 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.395071 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.394996 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.395170 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.395218 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.793059 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.793133 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.793175 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" 
output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.793261 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.834042 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.834125 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.841190 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:24 crc kubenswrapper[4857]: I0318 15:30:24.841387 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" 
podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.426675 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.428941 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: E0318 15:30:25.438020 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.514116 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.596088 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.638956 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.639330 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.639915 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.640477 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.643376 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349"} pod="metallb-system/frr-k8s-xtz2z" containerMessage="Container frr failed liveness probe, will be restarted" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.643888 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" containerID="cri-o://b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349" gracePeriod=2 Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.681102 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.681232 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:25 crc kubenswrapper[4857]: I0318 15:30:25.790575 4857 trace.go:236] Trace[678334250]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (18-Mar-2026 15:30:23.748) (total time: 2028ms): Mar 18 15:30:25 crc kubenswrapper[4857]: Trace[678334250]: [2.028661883s] [2.028661883s] END Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.415174 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.415564 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.423154 4857 patch_prober.go:28] interesting pod/console-89866dfb6-2ckqj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.423221 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-89866dfb6-2ckqj" podUID="20035f78-fe0d-44ce-8f03-aa1bc3bf851b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.484657 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.485081 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.484736 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.484827 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.485177 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.485246 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.484859 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.485341 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" 
probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.838273 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.838351 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.838268 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.838737 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.851115 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564130-g4dps" 
event={"ID":"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51","Type":"ContainerStarted","Data":"3056c6d80f0412acc9e13233ec8ba0e3a011b9f4bc53d7744e986f37b7a49a10"} Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.874511 4857 generic.go:334] "Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349" exitCode=143 Mar 18 15:30:26 crc kubenswrapper[4857]: I0318 15:30:26.874918 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349"} Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.351152 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.351248 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.351455 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.351585 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.481984 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.482065 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:27 crc kubenswrapper[4857]: I0318 15:30:27.894520 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"44a0bf32297794f16657df0eb294989afa82f4c2c4fb1cecc873181ef20b6292"} Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.268185 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.268229 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.620408 4857 patch_prober.go:28] interesting pod/metrics-server-6f67489d6c-zwgbg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.620830 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podUID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.633982 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:28 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:28 crc kubenswrapper[4857]: > Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.633989 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" 
output=< Mar 18 15:30:28 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:28 crc kubenswrapper[4857]: > Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.646876 4857 patch_prober.go:28] interesting pod/monitoring-plugin-7fb469cf8-28cd5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.646960 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" podUID="9ae4cfa8-f423-4706-89fa-5d87eec3340c" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.895018 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:28 crc kubenswrapper[4857]: I0318 15:30:28.936163 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.214130 4857 patch_prober.go:28] interesting pod/loki-operator-controller-manager-86c8cb9b45-kxpht container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get 
\"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.214691 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.220437 4857 trace.go:236] Trace[1319845072]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (18-Mar-2026 15:30:27.683) (total time: 1532ms): Mar 18 15:30:29 crc kubenswrapper[4857]: Trace[1319845072]: [1.532452116s] [1.532452116s] END Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.233661 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.275846 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:29 crc kubenswrapper[4857]: I0318 15:30:29.614061 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-hk5gs" podUID="189dc2a2-def0-41c0-9a6d-044db219385c" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.115546 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 
15:30:30.115674 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.441392 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.441688 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.460123 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.460216 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.829080 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:30 crc kubenswrapper[4857]: I0318 15:30:30.829074 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.346544 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.351828 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.485445 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.485500 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.485995 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.485894 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.532088 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.533172 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" 
within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.533168 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.533256 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.536021 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.537479 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:31 crc kubenswrapper[4857]: > Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.787563 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerName="ceilometer-central-agent" probeResult="failure" 
output="command timed out" Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.788229 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:31 crc kubenswrapper[4857]: I0318 15:30:31.788339 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.277090 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.277454 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.277787 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.277877 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.697065 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.697161 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.697278 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.697165 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.772747 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.772782 4857 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.772773 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:32 crc kubenswrapper[4857]: I0318 15:30:32.772773 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.302720 4857 trace.go:236] Trace[973797322]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (18-Mar-2026 15:30:30.534) (total time: 2751ms): Mar 18 15:30:33 crc kubenswrapper[4857]: Trace[973797322]: [2.75107307s] [2.75107307s] END Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.323090 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.406175 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.407743 4857 
patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gwqfj container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.407810 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" podUID="45ebdaa4-576e-40b7-810d-0f4fc570125d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.408036 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.534072 4857 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-4cprr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.534548 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" podUID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535030 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535066 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535095 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535167 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535194 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 
15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535317 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535343 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.535223 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 15:30:33.753305 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564130-g4dps" podStartSLOduration=31.046462635 podStartE2EDuration="33.750606963s" podCreationTimestamp="2026-03-18 15:30:00 +0000 UTC" firstStartedPulling="2026-03-18 15:30:20.505707154 +0000 UTC m=+5404.634835611" lastFinishedPulling="2026-03-18 15:30:23.209851462 +0000 UTC m=+5407.338979939" observedRunningTime="2026-03-18 15:30:33.736032336 +0000 UTC m=+5417.865160823" watchObservedRunningTime="2026-03-18 15:30:33.750606963 +0000 UTC m=+5417.879735420" Mar 18 15:30:33 crc kubenswrapper[4857]: I0318 
15:30:33.777137 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" podUID="18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.780249 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.780913 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.821505 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.904276 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:34 crc 
kubenswrapper[4857]: I0318 15:30:34.904291 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.906872 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988248 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988239 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988370 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988444 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988455 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988512 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988548 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988645 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988692 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988713 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988761 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988783 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:34 crc kubenswrapper[4857]: I0318 15:30:34.988746 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.029972 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.030054 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.030872 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.030928 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: 
I0318 15:30:35.030296 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.072023 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" podUID="d567742c-e8c4-4c28-9aae-afb3527cd915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.113203 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.194997 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.195503 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 
15:30:35.195004 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" podUID="56663366-8771-43d4-b5df-ef9b84b90a74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.278249 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.278285 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.320000 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podUID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.361154 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" podUID="d567742c-e8c4-4c28-9aae-afb3527cd915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.361306 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podUID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.404197 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podUID="ede9ac94-86ad-47ad-9358-4c051ec447cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.444994 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" podUID="56663366-8771-43d4-b5df-ef9b84b90a74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.445000 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podUID="ede9ac94-86ad-47ad-9358-4c051ec447cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.445141 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.650133 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.734026 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.735147 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.735213 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.735224 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.735147 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.735278 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.776291 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.816980 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:35 crc kubenswrapper[4857]: I0318 15:30:35.983041 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.064982 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" podUID="bf950907-821d-4d28-a563-f9865d7df7f0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.065393 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" podUID="bf950907-821d-4d28-a563-f9865d7df7f0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225124 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podUID="32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225289 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225331 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225628 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podUID="32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225668 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.225878 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.226053 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.246158 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:30:36 
crc kubenswrapper[4857]: E0318 15:30:36.250671 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309004 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309020 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podUID="2cbcf5ed-41b1-4596-8e5d-05212018ba3b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309057 4857 patch_prober.go:28] interesting pod/console-89866dfb6-2ckqj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309145 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podUID="2cbcf5ed-41b1-4596-8e5d-05212018ba3b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309172 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.309213 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-89866dfb6-2ckqj" podUID="20035f78-fe0d-44ce-8f03-aa1bc3bf851b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.315866 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": context deadline exceeded" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.315924 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.315942 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": context deadline exceeded" Mar 18 15:30:36 crc 
kubenswrapper[4857]: I0318 15:30:36.315973 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.484654 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.484668 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.484742 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.484800 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 
crc kubenswrapper[4857]: I0318 15:30:36.485915 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.485945 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.486075 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.486106 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.838682 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 
15:30:36.838801 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.838689 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:36 crc kubenswrapper[4857]: I0318 15:30:36.838960 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.353893 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.354312 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.354005 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.354395 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.439218 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.974124 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.986581 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" 
output="command timed out" Mar 18 15:30:37 crc kubenswrapper[4857]: I0318 15:30:37.991193 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.248892 4857 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-rfczd container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.248921 4857 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-rfczd container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.249004 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" podUID="d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.249086 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" podUID="d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get 
\"https://10.217.0.74:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.619285 4857 patch_prober.go:28] interesting pod/metrics-server-6f67489d6c-zwgbg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.619375 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podUID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.619438 4857 patch_prober.go:28] interesting pod/metrics-server-6f67489d6c-zwgbg container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.619516 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podUID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.646710 4857 patch_prober.go:28] interesting pod/monitoring-plugin-7fb469cf8-28cd5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure 
output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.646872 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" podUID="9ae4cfa8-f423-4706-89fa-5d87eec3340c" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:38 crc kubenswrapper[4857]: I0318 15:30:38.778743 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-tg9wd" podUID="3471c66b-ec38-4efc-b1ab-cbf281f8d424" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.055067 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.055126 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.233182 4857 patch_prober.go:28] interesting pod/loki-operator-controller-manager-86c8cb9b45-kxpht container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.233645 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.274146 4857 patch_prober.go:28] interesting pod/loki-operator-controller-manager-86c8cb9b45-kxpht container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.274229 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.766792 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.766905 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.773982 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:39 crc kubenswrapper[4857]: I0318 15:30:39.775016 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.194573 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.194671 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.297002 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podUID="2fc1a575-873e-43b1-9707-bc6247ec8bbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.358985 4857 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.359075 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.359304 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.359357 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.439710 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.439824 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.461151 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.461380 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.508386 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.508477 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:40 crc kubenswrapper[4857]: I0318 15:30:40.785012 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.160671 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.160843 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.315913 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.316036 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" 
probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485119 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485230 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485339 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485364 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485587 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485624 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485684 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.485835 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.771154 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:41 crc kubenswrapper[4857]: I0318 15:30:41.772045 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.277009 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator 
namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.277110 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.277634 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.277675 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.655047 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.656960 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" 
probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.818635 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.818963 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.819131 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.819231 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.820208 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.823909 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:42 crc kubenswrapper[4857]: I0318 15:30:42.843018 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.109010 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202290 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202384 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202449 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202544 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" 
output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202587 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.202613 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:43 crc kubenswrapper[4857]: > Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.345351 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.353639 4857 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gwqfj container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.354028 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" 
podUID="45ebdaa4-576e-40b7-810d-0f4fc570125d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.405082 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.405172 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.405097 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.405251 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.530954 4857 patch_prober.go:28] 
interesting pod/authentication-operator-69f744f599-4cprr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531011 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" podUID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531468 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531500 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531732 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531783 4857 patch_prober.go:28] 
interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531811 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531816 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531845 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.531858 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.772856 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-tg9wd" podUID="3471c66b-ec38-4efc-b1ab-cbf281f8d424" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.776793 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.776913 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.807502 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"fdb2200b298e4eeb43a92b8bc952f8b97d17c90d6e2667b29c76de9b46119703"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.808026 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerName="ceilometer-central-agent" containerID="cri-o://fdb2200b298e4eeb43a92b8bc952f8b97d17c90d6e2667b29c76de9b46119703" gracePeriod=30 Mar 18 15:30:43 crc kubenswrapper[4857]: I0318 15:30:43.856087 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" podUID="18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc 
kubenswrapper[4857]: I0318 15:30:44.037036 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.037435 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.037087 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.037519 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.088862 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" podUID="b876d788-10af-45fb-95e6-37e7e127249f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.133024 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.133400 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.217194 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.217329 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.220398 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"c5c8064d61732ac35eeddc55f4aea20876b1b6a8841232cfb87d4bad557bf558"} pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" containerMessage="Container webhook-server failed liveness probe, will be restarted" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.220467 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" containerID="cri-o://c5c8064d61732ac35eeddc55f4aea20876b1b6a8841232cfb87d4bad557bf558" gracePeriod=2 Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.259005 
4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.258972 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" podUID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.98:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.259221 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.300094 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.342395 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679020 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679022 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679107 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679233 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679326 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc 
kubenswrapper[4857]: I0318 15:30:44.679397 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679463 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679501 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679554 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679604 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679725 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679747 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679842 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679526 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.679919 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.686480 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"9e1d771ac94691530ef3bb4ca8c937f2d9df0afbf7d4d30ec5b3a738cd2890a9"} pod="openshift-ingress/router-default-5444994796-xwln7" containerMessage="Container router failed liveness probe, will be restarted" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.686541 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" containerID="cri-o://9e1d771ac94691530ef3bb4ca8c937f2d9df0afbf7d4d30ec5b3a738cd2890a9" gracePeriod=10 Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.762943 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.762943 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763017 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763031 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763031 4857 patch_prober.go:28] 
interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763083 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763119 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763395 4857 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-gh9dk container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763416 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.763478 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.764718 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"caef1f058eb721f06b0b8c4e176a7d6041ddc1c103dfe7f18f11b7f718c30210"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.764782 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" podUID="d2aa0233-e26e-477a-adb9-6b281555b255" containerName="package-server-manager" containerID="cri-o://caef1f058eb721f06b0b8c4e176a7d6041ddc1c103dfe7f18f11b7f718c30210" gracePeriod=30 Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.866917 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.866917 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" podUID="d567742c-e8c4-4c28-9aae-afb3527cd915" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.907995 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" 
podUID="56663366-8771-43d4-b5df-ef9b84b90a74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.908044 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d86ecda9-1d3b-4efe-9778-30f3f6803c11" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.908143 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="d86ecda9-1d3b-4efe-9778-30f3f6803c11" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:44 crc kubenswrapper[4857]: I0318 15:30:44.949036 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" podUID="633285e4-04be-48d6-a496-642aa673be88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.031966 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.032048 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podUID="7f57203c-7aa8-4db7-a1f1-973a59e8fb9e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.151123 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" podUID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.232960 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podUID="ede9ac94-86ad-47ad-9358-4c051ec447cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.233339 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.398586 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.399357 4857 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439140 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439183 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439183 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439310 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439374 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439381 4857 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439424 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439448 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.439466 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.462211 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"87c2fe843556a160abbcb53f089f2dbdfdf5f59def26965187ed39139d2835cf"} pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.462294 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" containerID="cri-o://87c2fe843556a160abbcb53f089f2dbdfdf5f59def26965187ed39139d2835cf" gracePeriod=10 Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.645124 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.645972 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.727233 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.727399 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.769096 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podUID="32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.810154 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-qpr5j" podUID="bf950907-821d-4d28-a563-f9865d7df7f0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.125:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.810284 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:45 crc kubenswrapper[4857]: I0318 15:30:45.810318 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.054428 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.054557 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.054656 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.054680 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.230349 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podUID="2cbcf5ed-41b1-4596-8e5d-05212018ba3b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.230398 4857 patch_prober.go:28] interesting pod/console-89866dfb6-2ckqj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.230349 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-7bb4cc7c98-fjhn2" podUID="2cbcf5ed-41b1-4596-8e5d-05212018ba3b" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.101:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.230480 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-89866dfb6-2ckqj" podUID="20035f78-fe0d-44ce-8f03-aa1bc3bf851b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.230593 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.237622 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc 
container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.237681 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.237727 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.244470 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.244594 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23" gracePeriod=30 Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.272690 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2"} pod="metallb-system/frr-k8s-xtz2z" 
containerMessage="Container controller failed liveness probe, will be restarted" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.272876 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" containerID="cri-o://de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2" gracePeriod=2 Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.315980 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.316100 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.444190 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.444783 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.444907 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.444227 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.445312 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.445406 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.447820 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.447874 4857 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" containerID="cri-o://aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8" gracePeriod=30 Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.485296 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.485418 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.485923 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.486048 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569039 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" 
podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569248 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569304 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569310 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569330 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569369 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" podUID="75baf138-7643-4b4f-9919-88edd42aee95" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.100:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.569440 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.688047 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.701139 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.769253 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.772606 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" 
Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.839275 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.839362 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.839420 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.839307 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.839812 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.840022 4857 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 15:30:46 crc kubenswrapper[4857]: I0318 15:30:46.853854 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"2b8446d8d8d3e8191e29a2bcf3fca537abec08ed645b1d0fafab48986027acaf"} pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.231559 4857 patch_prober.go:28] interesting pod/console-89866dfb6-2ckqj container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.231655 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-89866dfb6-2ckqj" podUID="20035f78-fe0d-44ce-8f03-aa1bc3bf851b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.142:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.328638 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" event={"ID":"7ae3e1fc-2002-4805-bed1-f96339dce3a0","Type":"ContainerDied","Data":"c5c8064d61732ac35eeddc55f4aea20876b1b6a8841232cfb87d4bad557bf558"} Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.332828 4857 generic.go:334] "Generic (PLEG): container finished" podID="7ae3e1fc-2002-4805-bed1-f96339dce3a0" containerID="c5c8064d61732ac35eeddc55f4aea20876b1b6a8841232cfb87d4bad557bf558" exitCode=137 Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.350385 4857 patch_prober.go:28] interesting 
pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.350434 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded" start-of-body= Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.350468 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.350489 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.350553 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.351970 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"eba094d46af9955bc10193bf7231c612214121ca94aacd8cc8da5278fb3dde94"} 
pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.352009 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" containerID="cri-o://eba094d46af9955bc10193bf7231c612214121ca94aacd8cc8da5278fb3dde94" gracePeriod=30 Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.506104 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.506259 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.506583 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.613616 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.927407 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.971862 4857 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-dnrd6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:47 crc kubenswrapper[4857]: I0318 15:30:47.972181 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnrd6" podUID="d4300327-af6f-4261-8973-ef640d24993f" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.168482 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:30:48 crc kubenswrapper[4857]: E0318 15:30:48.170558 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.249257 4857 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-rfczd container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe 
status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.249344 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" podUID="d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.249439 4857 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-rfczd container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.249458 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-rfczd" podUID="d07b1e5a-d1da-4b39-afd6-3dfc2c49acfa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.74:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.388284 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 18 15:30:48 crc kubenswrapper[4857]: 
I0318 15:30:48.388564 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.549088 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" podUID="cf688963-c59d-4667-8589-150c82a1e4d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.127:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.619979 4857 patch_prober.go:28] interesting pod/metrics-server-6f67489d6c-zwgbg container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.620052 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podUID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.86:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.620131 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.621792 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" 
containerStatusID={"Type":"cri-o","ID":"783ccda3034bcd4060228c662b4bc26ab6b3a9b1ea6187056fac74f230912fb1"} pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" containerMessage="Container metrics-server failed liveness probe, will be restarted" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.621862 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" podUID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerName="metrics-server" containerID="cri-o://783ccda3034bcd4060228c662b4bc26ab6b3a9b1ea6187056fac74f230912fb1" gracePeriod=170 Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.646131 4857 patch_prober.go:28] interesting pod/monitoring-plugin-7fb469cf8-28cd5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.646216 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" podUID="9ae4cfa8-f423-4706-89fa-5d87eec3340c" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.646338 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 15:30:48.843247 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-tg9wd" podUID="3471c66b-ec38-4efc-b1ab-cbf281f8d424" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 18 15:30:48 crc kubenswrapper[4857]: I0318 
15:30:48.843435 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 15:30:48 crc kubenswrapper[4857]: E0318 15:30:48.951981 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a9ec00_16b4_4349_a2c6_a2e6397e0ce0.slice/crio-de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2.scope\": RecentStats: unable to find data in memory cache]" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.024244 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.024353 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-pm2jd" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.024257 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.025190 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pm2jd" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.030930 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"b2c3b217e7440954722382b9202cf8ef6b2433c9ac7baff10c85817686662f1b"} pod="metallb-system/speaker-pm2jd" containerMessage="Container speaker failed liveness probe, will be restarted" 
Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.031042 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-pm2jd" podUID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerName="speaker" containerID="cri-o://b2c3b217e7440954722382b9202cf8ef6b2433c9ac7baff10c85817686662f1b" gracePeriod=2 Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.121377 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tg9wd" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.212133 4857 patch_prober.go:28] interesting pod/loki-operator-controller-manager-86c8cb9b45-kxpht container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.212261 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" podUID="e5ba6b5a-524d-488a-9435-5fea2c394e6a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.212394 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.383310 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-86c8cb9b45-kxpht" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.507143 4857 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get 
\"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.507567 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.524980 4857 generic.go:334] "Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2" exitCode=137 Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.525042 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"de95367948d4c51881846bc2f7b10462d1108c6a8fb98d687f66da1f28fd1ef2"} Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.646960 4857 patch_prober.go:28] interesting pod/monitoring-plugin-7fb469cf8-28cd5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.647050 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" podUID="9ae4cfa8-f423-4706-89fa-5d87eec3340c" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.87:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.744932 4857 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.745063 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.745024 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": dial tcp 10.217.0.121:8081: connect: connection refused" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.747145 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.766680 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.766723 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="117d706b-860f-4f17-8f2b-5d27b7cdfe61" 
containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.184:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.773746 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.773849 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.773942 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="0e6af46e-8f86-4122-bdaf-8ccec1a76775" containerName="prometheus" probeResult="failure" output="command timed out" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.777412 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:49 crc kubenswrapper[4857]: I0318 15:30:49.777494 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.115899 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.116233 4857 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.116332 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.388205 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podUID="2fc1a575-873e-43b1-9707-bc6247ec8bbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.388254 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-xjwdv" podUID="2fc1a575-873e-43b1-9707-bc6247ec8bbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.440252 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.440543 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" 
podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.440637 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.461058 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.461124 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.461221 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.544861 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" event={"ID":"75baf138-7643-4b4f-9919-88edd42aee95","Type":"ContainerDied","Data":"87c2fe843556a160abbcb53f089f2dbdfdf5f59def26965187ed39139d2835cf"} Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.544812 4857 generic.go:334] "Generic (PLEG): container finished" podID="75baf138-7643-4b4f-9919-88edd42aee95" 
containerID="87c2fe843556a160abbcb53f089f2dbdfdf5f59def26965187ed39139d2835cf" exitCode=0 Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.549854 4857 generic.go:334] "Generic (PLEG): container finished" podID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerID="1e2eb1b81a9aa740da28aceccb447206333b981331cd80d9eee81e74fb41fe4b" exitCode=1 Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.549923 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" event={"ID":"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4","Type":"ContainerDied","Data":"1e2eb1b81a9aa740da28aceccb447206333b981331cd80d9eee81e74fb41fe4b"} Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.551194 4857 scope.go:117] "RemoveContainer" containerID="1e2eb1b81a9aa740da28aceccb447206333b981331cd80d9eee81e74fb41fe4b" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.554128 4857 generic.go:334] "Generic (PLEG): container finished" podID="d567742c-e8c4-4c28-9aae-afb3527cd915" containerID="2aa75d7fc73361e725995b470aafe4a651cfb74cde1c4c9e74b4877589f27b37" exitCode=1 Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.554238 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" event={"ID":"d567742c-e8c4-4c28-9aae-afb3527cd915","Type":"ContainerDied","Data":"2aa75d7fc73361e725995b470aafe4a651cfb74cde1c4c9e74b4877589f27b37"} Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.554573 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762"} pod="openstack-operators/openstack-operator-index-8cxcs" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.554622 4857 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" containerID="cri-o://091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" gracePeriod=30 Mar 18 15:30:50 crc kubenswrapper[4857]: I0318 15:30:50.556086 4857 scope.go:117] "RemoveContainer" containerID="2aa75d7fc73361e725995b470aafe4a651cfb74cde1c4c9e74b4877589f27b37" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.117169 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.117520 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.117116 4857 patch_prober.go:28] interesting pod/logging-loki-distributor-9c6b6d984-xjvbj container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.117619 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj" podUID="b4256ac3-3896-4c43-8d10-ca5ac43f4991" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: 
I0318 15:30:51.161735 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.161890 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.316315 4857 patch_prober.go:28] interesting pod/thanos-querier-556796c855-jl79p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.316392 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-556796c855-jl79p" podUID="03f7b890-bf37-439b-b604-a3190e5e8b27" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.84:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.359884 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.359953 4857 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.440236 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.440297 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.441550 4857 patch_prober.go:28] interesting pod/logging-loki-query-frontend-ff66c4dc9-82dsb container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.441632 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb" podUID="366a3cfc-7c2d-4212-a16d-2415868b12ba" containerName="loki-query-frontend" probeResult="failure" output="Get 
\"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.460887 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.460986 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.461971 4857 patch_prober.go:28] interesting pod/logging-loki-querier-6dcbdf8bb8-jp89f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.462106 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f" podUID="64c46410-682b-49b0-9aa2-8f223a69165b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.484987 4857 patch_prober.go:28] interesting 
pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485009 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485080 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485096 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485180 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485200 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485254 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.485293 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.567578 4857 generic.go:334] "Generic (PLEG): container finished" podID="d2cd8f0d-237c-4db5-b2c6-31c6d99018e4" containerID="a464ec604042f508d8b045a49490f4658b7920b814360845da26813df92ad952" exitCode=1 Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.567655 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" event={"ID":"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4","Type":"ContainerDied","Data":"a464ec604042f508d8b045a49490f4658b7920b814360845da26813df92ad952"} Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.569080 4857 scope.go:117] "RemoveContainer" containerID="a464ec604042f508d8b045a49490f4658b7920b814360845da26813df92ad952" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.577561 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" 
event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"30ba20a1cb93c2fc96f0ddff0a0151e036e3ff3dff508c38420fcf5eb60c5c42"} Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.577893 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.587352 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" event={"ID":"7ae3e1fc-2002-4805-bed1-f96339dce3a0","Type":"ContainerStarted","Data":"5b3a564d5189e66b0474d718e1eff9f49e308d3141974c9157cb4ea11d5311e2"} Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.587494 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.592737 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.604291 4857 generic.go:334] "Generic (PLEG): container finished" podID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerID="aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8" exitCode=0 Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.604375 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerDied","Data":"aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8"} Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.611702 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" event={"ID":"75baf138-7643-4b4f-9919-88edd42aee95","Type":"ContainerStarted","Data":"941d9adea20fc9439244fbd30730dd3a59460ba69d0740329a21923a894e49a4"} Mar 18 15:30:51 crc 
kubenswrapper[4857]: I0318 15:30:51.612528 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.614615 4857 generic.go:334] "Generic (PLEG): container finished" podID="d2aa0233-e26e-477a-adb9-6b281555b255" containerID="caef1f058eb721f06b0b8c4e176a7d6041ddc1c103dfe7f18f11b7f718c30210" exitCode=0 Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.614652 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" event={"ID":"d2aa0233-e26e-477a-adb9-6b281555b255","Type":"ContainerDied","Data":"caef1f058eb721f06b0b8c4e176a7d6041ddc1c103dfe7f18f11b7f718c30210"} Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.771536 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.771917 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.773305 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.773352 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.773688 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"dafc1fcd5799591aa908ce0bf0bc189cc3f522c9960cc3e0575755e1b1b634e6"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed 
liveness probe, will be restarted" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.777104 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.799590 4857 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.799690 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5081975d-5c3d-4788-b5e1-cd21e4fa3852" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.955134 4857 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:51 crc kubenswrapper[4857]: I0318 15:30:51.955234 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="4da2f7e2-d9d9-42ff-b7b7-a129541ecc39" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 
15:30:52.161267 4857 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.161313 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="8fbde296-bf61-4d05-bf29-e27b5b58c150" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.280258 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.280774 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.280893 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.281098 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Liveness probe status=failure 
output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.281162 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.281219 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.292682 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"23adb73f740dbf82d50af3ac9a84d6751f75602c16a4ad609ec63adf6b75f7f4"} pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" containerMessage="Container operator failed liveness probe, will be restarted" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.292771 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" containerID="cri-o://23adb73f740dbf82d50af3ac9a84d6751f75602c16a4ad609ec63adf6b75f7f4" gracePeriod=30 Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.484391 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.484504 4857 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-w5jpj" podUID="206851e1-412e-4888-9635-f8eca5aa579e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.486661 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.486735 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.632101 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" event={"ID":"d2aa0233-e26e-477a-adb9-6b281555b255","Type":"ContainerStarted","Data":"e72c74dcadc025f0f7ae19d53cc15eac060148eab2c6cd5560e750d52febf14d"} Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.632398 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.634890 4857 generic.go:334] "Generic (PLEG): container finished" podID="a73a34ce-a354-406b-ac7a-68b7f5aaf95b" containerID="b2c3b217e7440954722382b9202cf8ef6b2433c9ac7baff10c85817686662f1b" exitCode=137 Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.634969 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-pm2jd" event={"ID":"a73a34ce-a354-406b-ac7a-68b7f5aaf95b","Type":"ContainerDied","Data":"b2c3b217e7440954722382b9202cf8ef6b2433c9ac7baff10c85817686662f1b"} Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.637585 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" event={"ID":"d567742c-e8c4-4c28-9aae-afb3527cd915","Type":"ContainerStarted","Data":"5ad4f6e14a95423465fdebdabaf0b1a51f3c80cd05fbca164e1ac8fc483125a2"} Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.638779 4857 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" containerID="cri-o://2aa75d7fc73361e725995b470aafe4a651cfb74cde1c4c9e74b4877589f27b37" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.639037 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.653339 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.653341 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.653474 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" 
podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.653566 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.653429 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.771244 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.772244 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.772302 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.772943 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.773006 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-cell1-galera-0" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.774063 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.776064 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.776127 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-89qls" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.776317 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.776514 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-89qls" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.777286 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c"} pod="openshift-marketplace/community-operators-89qls" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.777353 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-89qls" 
podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" containerID="cri-o://6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c" gracePeriod=30 Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.799487 4857 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.62:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.799926 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="5081975d-5c3d-4788-b5e1-cd21e4fa3852" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.955802 4857 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" start-of-body= Mar 18 15:30:52 crc kubenswrapper[4857]: I0318 15:30:52.955880 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="4da2f7e2-d9d9-42ff-b7b7-a129541ecc39" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": context deadline exceeded" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.187856 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mrtkc" podUID="2d9b7b6d-9b28-4a50-8bda-458c3f8088c1" containerName="cert-manager-webhook" probeResult="failure" output="Get 
\"http://10.217.0.45:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407220 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407530 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407414 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407475 4857 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gwqfj container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407682 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.0.22:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407678 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407729 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" podUID="45ebdaa4-576e-40b7-810d-0f4fc570125d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407577 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407620 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407961 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407981 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.407972 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.408062 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.408141 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.409521 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"7cf4263fc09db517fa7fbc5e6ab371239d02542068443b3c0f92cca335fc1134"} pod="openshift-console-operator/console-operator-58897d9998-k6kp8" containerMessage="Container console-operator failed liveness probe, will be restarted" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.409568 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" containerID="cri-o://7cf4263fc09db517fa7fbc5e6ab371239d02542068443b3c0f92cca335fc1134" gracePeriod=30 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.449001 4857 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-4cprr 
container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.449059 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" podUID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.449111 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.450591 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"6e8e747879c1f7edeefab0b852d7eecc80f7f85fd951ba4cb56a6f5e360a9588"} pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.450641 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" podUID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerName="authentication-operator" containerID="cri-o://6e8e747879c1f7edeefab0b852d7eecc80f7f85fd951ba4cb56a6f5e360a9588" gracePeriod=30 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.504356 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Liveness probe 
status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.504478 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.504354 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.504551 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.504572 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.506250 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"153d5363065a7645ab084bbd0be5c6de25c6aa2c2b518991bdeb1ea84bd0509d"} 
pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" containerMessage="Container controller-manager failed liveness probe, will be restarted" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.506303 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" containerID="cri-o://153d5363065a7645ab084bbd0be5c6de25c6aa2c2b518991bdeb1ea84bd0509d" gracePeriod=30 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.652688 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerStarted","Data":"25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4"} Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.653308 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.653358 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.653428 4857 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" containerID="cri-o://aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8" Mar 18 15:30:53 
crc kubenswrapper[4857]: I0318 15:30:53.653443 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.655691 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" event={"ID":"f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4","Type":"ContainerStarted","Data":"8c82a9b6d5823de3d8cc5edeef99499e40854265b62d74f7e50e6190a1865bb9"} Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.655897 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.658731 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" event={"ID":"d2cd8f0d-237c-4db5-b2c6-31c6d99018e4","Type":"ContainerStarted","Data":"e520261287cc1e4f4096e64fb41c91bca3840e93019df2893e876138fefb4538"} Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.658905 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.662194 4857 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23" exitCode=0 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.662270 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"699d41a3961e93ddf34ed3767e444d407abc986012a5d1ade2f0d45817e5bc23"} Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.662607 4857 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.695022 4857 patch_prober.go:28] interesting pod/perses-operator-6c9d87fc97-ddtxj container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.695145 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj" podUID="79d3df2c-25f0-4e16-a39d-cc0d6a85277f" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.775983 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.776109 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.776143 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.776188 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777274 4857 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777308 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777370 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777393 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777327 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output="command timed out" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777427 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777536 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.777618 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.778942 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" podUID="18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.778960 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f"} pod="openshift-marketplace/redhat-marketplace-f9sl8" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.779010 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.779007 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" containerID="cri-o://651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" gracePeriod=30 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.780839 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275"} pod="openshift-marketplace/certified-operators-zl78l" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.780895 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" 
containerID="cri-o://7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275" gracePeriod=30 Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.802277 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:53 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:53 crc kubenswrapper[4857]: > Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.804032 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.807645 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.812199 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.812700 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.813702 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.813820 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.814644 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:53 crc kubenswrapper[4857]: E0318 15:30:53.814703 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" Mar 18 15:30:53 crc kubenswrapper[4857]: I0318 15:30:53.907152 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037069 4857 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037131 4857 patch_prober.go:28] interesting pod/downloads-7954f5f757-gvkpz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037136 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037163 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": EOF" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037168 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gvkpz" podUID="ef638f17-5999-467e-b170-8ef20068e451" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.037207 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.6:8443/readyz\": EOF" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.359364 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.387007 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.387007 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.469038 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d" podUID="73a9b06c-5f5c-46f7-9548-28c5a9513a95" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.469074 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.552006 4857 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" podUID="b876d788-10af-45fb-95e6-37e7e127249f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.552019 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.552224 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-smknr" podUID="b876d788-10af-45fb-95e6-37e7e127249f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.592970 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.593065 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.801934 4857 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-ptv8b" podUID="8ffb9263-05b9-447d-a332-31f5f3312ea9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.816386 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" podUID="18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.816494 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.816515 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.816546 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.826282 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" 
containerStatusID={"Type":"cri-o","ID":"a97d665e87dca706b2c7c7dfdea0091b04fee35c6af3d47ca266f428853c7d27"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.826371 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" containerID="cri-o://a97d665e87dca706b2c7c7dfdea0091b04fee35c6af3d47ca266f428853c7d27" gracePeriod=30 Mar 18 15:30:54 crc kubenswrapper[4857]: E0318 15:30:54.830512 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:54 crc kubenswrapper[4857]: E0318 15:30:54.833145 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:54 crc kubenswrapper[4857]: E0318 15:30:54.849422 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:54 crc kubenswrapper[4857]: E0318 15:30:54.849504 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-8cxcs" podUID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerName="registry-server" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858154 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858209 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858257 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858288 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858313 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.24:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858360 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858414 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" podUID="e160f13b-785a-46a2-adb4-fa92ce7c6ab7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858546 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858646 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-fvz4f" podUID="cffafd39-a112-46ab-becf-ad58facd5712" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.858310 4857 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-gwqfj container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.859046 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj" podUID="45ebdaa4-576e-40b7-810d-0f4fc570125d" 
containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.91:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860103 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"1f63aefa15bfd32c6e0413b7646b41031ec9ec2b0ba15c783c0bca7d09de4af6"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" containerMessage="Container olm-operator failed liveness probe, will be restarted" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860138 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" containerID="cri-o://1f63aefa15bfd32c6e0413b7646b41031ec9ec2b0ba15c783c0bca7d09de4af6" gracePeriod=30 Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860670 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860819 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860971 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.860997 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861031 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" podUID="01c6ffec-b474-4bfb-a282-484214bea129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861174 4857 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-298nc container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861216 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-298nc" podUID="a977ae9e-847e-402e-ba1f-b716811ee998" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861662 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861686 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.861731 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.870828 4857 generic.go:334] "Generic (PLEG): container finished" podID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerID="eba094d46af9955bc10193bf7231c612214121ca94aacd8cc8da5278fb3dde94" exitCode=0 Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.872538 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" event={"ID":"0d61789c-ee3d-4aff-99a1-592b91b773c6","Type":"ContainerDied","Data":"eba094d46af9955bc10193bf7231c612214121ca94aacd8cc8da5278fb3dde94"} Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.875081 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb"} pod="openshift-marketplace/redhat-operators-b7qbr" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.875137 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" 
containerID="cri-o://408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" gracePeriod=30 Mar 18 15:30:54 crc kubenswrapper[4857]: I0318 15:30:54.943218 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.024978 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" podUID="56663366-8771-43d4-b5df-ef9b84b90a74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.025064 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="d86ecda9-1d3b-4efe-9778-30f3f6803c11" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.025118 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.025153 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="d86ecda9-1d3b-4efe-9778-30f3f6803c11" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.7:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 
15:30:55.108009 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" podUID="633285e4-04be-48d6-a496-642aa673be88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.108020 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.108205 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.273991 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" podUID="56663366-8771-43d4-b5df-ef9b84b90a74" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.273991 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podUID="7f57203c-7aa8-4db7-a1f1-973a59e8fb9e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.274320 4857 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-9m5mv" podUID="633285e4-04be-48d6-a496-642aa673be88" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.274690 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fqnq2" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.357004 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.357096 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" podUID="f86c8f25-0e6c-4911-87f8-7ff89a25a040" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.357215 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.357032 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podUID="ede9ac94-86ad-47ad-9358-4c051ec447cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc 
kubenswrapper[4857]: I0318 15:30:55.357528 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.398030 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-8glm4" podUID="7f57203c-7aa8-4db7-a1f1-973a59e8fb9e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.398206 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.398266 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.404068 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"44a0bf32297794f16657df0eb294989afa82f4c2c4fb1cecc873181ef20b6292"} pod="metallb-system/frr-k8s-xtz2z" containerMessage="Container frr failed liveness probe, will be restarted" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.404328 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" containerID="cri-o://44a0bf32297794f16657df0eb294989afa82f4c2c4fb1cecc873181ef20b6292" gracePeriod=2 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.482805 4857 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.483853 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" podUID="ede9ac94-86ad-47ad-9358-4c051ec447cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.484632 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p" podUID="ffdcecae-8dae-48b2-84d8-73deac76eeca" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.498928 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-89866dfb6-2ckqj" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.621090 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": read tcp 10.217.0.2:35180->10.217.0.24:8443: read: connection reset by peer" start-of-body= Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.621157 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.24:8443/healthz\": read tcp 10.217.0.2:35180->10.217.0.24:8443: read: connection reset by peer" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.621227 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podUID="32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.786066 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" podUID="32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.786205 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.786489 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.787227 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-dmrdv" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.787547 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" 
podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.788326 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" podUID="bdf23497-4141-4f8f-859a-0d1e4f8c80f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.124:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.788381 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" podUID="18b73b64-9eec-426b-86eb-6a1045a9d25c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.126:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.788744 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-xnh2t" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.888523 4857 generic.go:334] "Generic (PLEG): container finished" podID="e4e4af7c-f5d3-4b12-b419-70dbae8cab23" containerID="6e8e747879c1f7edeefab0b852d7eecc80f7f85fd951ba4cb56a6f5e360a9588" exitCode=0 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.888629 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" event={"ID":"e4e4af7c-f5d3-4b12-b419-70dbae8cab23","Type":"ContainerDied","Data":"6e8e747879c1f7edeefab0b852d7eecc80f7f85fd951ba4cb56a6f5e360a9588"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.893245 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-pm2jd" event={"ID":"a73a34ce-a354-406b-ac7a-68b7f5aaf95b","Type":"ContainerStarted","Data":"f24e2de864abc6e8e796bb8cb0fd31df252e7cbc35a969b884694dadbf62ce20"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.893489 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pm2jd" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.907295 4857 generic.go:334] "Generic (PLEG): container finished" podID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerID="44a0bf32297794f16657df0eb294989afa82f4c2c4fb1cecc873181ef20b6292" exitCode=143 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.907511 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerDied","Data":"44a0bf32297794f16657df0eb294989afa82f4c2c4fb1cecc873181ef20b6292"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.907596 4857 scope.go:117] "RemoveContainer" containerID="b7857934cf8b3a82cf9a076e3ee6ff536128dafe5cf97349559f7069d2e10349" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.913419 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-xwln7_188cb24d-b3cf-46dd-8a07-12afe6ea75e0/router/0.log" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.913488 4857 generic.go:334] "Generic (PLEG): container finished" podID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerID="9e1d771ac94691530ef3bb4ca8c937f2d9df0afbf7d4d30ec5b3a738cd2890a9" exitCode=137 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.913591 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwln7" event={"ID":"188cb24d-b3cf-46dd-8a07-12afe6ea75e0","Type":"ContainerDied","Data":"9e1d771ac94691530ef3bb4ca8c937f2d9df0afbf7d4d30ec5b3a738cd2890a9"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.927147 4857 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-k6kp8_2e10ef1d-7c47-45d3-b16d-1ac7adccadbd/console-operator/0.log" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.927258 4857 generic.go:334] "Generic (PLEG): container finished" podID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerID="7cf4263fc09db517fa7fbc5e6ab371239d02542068443b3c0f92cca335fc1134" exitCode=1 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.927363 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" event={"ID":"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd","Type":"ContainerDied","Data":"7cf4263fc09db517fa7fbc5e6ab371239d02542068443b3c0f92cca335fc1134"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.931593 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" event={"ID":"0d61789c-ee3d-4aff-99a1-592b91b773c6","Type":"ContainerStarted","Data":"9cadc3b5d3df564f11fd1ac29963a641459cbf9cc7ecf27021110da129e19435"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.931649 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.941161 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-85tjg_8ad51d9d-dcd1-467e-9aa6-162d19c035ed/olm-operator/0.log" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.941217 4857 generic.go:334] "Generic (PLEG): container finished" podID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerID="1f63aefa15bfd32c6e0413b7646b41031ec9ec2b0ba15c783c0bca7d09de4af6" exitCode=2 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.941304 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" event={"ID":"8ad51d9d-dcd1-467e-9aa6-162d19c035ed","Type":"ContainerDied","Data":"1f63aefa15bfd32c6e0413b7646b41031ec9ec2b0ba15c783c0bca7d09de4af6"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.949118 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-c867bfcc4-nc2bq_3d4741b7-1f3f-405d-b675-d0141044421a/controller-manager/0.log" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.949173 4857 generic.go:334] "Generic (PLEG): container finished" podID="3d4741b7-1f3f-405d-b675-d0141044421a" containerID="153d5363065a7645ab084bbd0be5c6de25c6aa2c2b518991bdeb1ea84bd0509d" exitCode=0 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.949394 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerDied","Data":"153d5363065a7645ab084bbd0be5c6de25c6aa2c2b518991bdeb1ea84bd0509d"} Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964058 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964442 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964489 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964530 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964554 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.964696 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.965179 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.965211 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.966814 4857 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"fc1203b3a729cafe8010b8e3d66f285038ac11e2ffbb80649c81c48d7c75d1c6"} pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.966858 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" containerID="cri-o://fc1203b3a729cafe8010b8e3d66f285038ac11e2ffbb80649c81c48d7c75d1c6" gracePeriod=30 Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.988452 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-86872" Mar 18 15:30:55 crc kubenswrapper[4857]: I0318 15:30:55.996786 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.152177 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8" podUID="2d1893e2-6251-42ef-82d7-529e1f27ec4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.197562 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-grt7j" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.302554 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-l4h6z" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 
15:30:56.351958 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.352021 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.417213 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:30:56 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:30:56 crc kubenswrapper[4857]: > Mar 18 15:30:56 crc kubenswrapper[4857]: E0318 15:30:56.428918 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:56 crc kubenswrapper[4857]: E0318 15:30:56.433975 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:56 crc kubenswrapper[4857]: E0318 15:30:56.437858 4857 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:56 crc kubenswrapper[4857]: E0318 15:30:56.437914 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.529033 4857 scope.go:117] "RemoveContainer" containerID="ecef915baadddc5638b2a49af94ce7de689e1de03537ff50f0d15736ee7ca79a" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.877975 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.878454 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.891860 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-f84d7fd4f-mpg2d" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.982440 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerID="bee0c79f6d5dfa80a3f7716eee333ebaabbd1f86cbf3e251968dd65a623d6623" exitCode=1 Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.982510 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" event={"ID":"fdc9df02-49d3-4a40-ba9c-d6ef085abb04","Type":"ContainerDied","Data":"bee0c79f6d5dfa80a3f7716eee333ebaabbd1f86cbf3e251968dd65a623d6623"} Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.983516 4857 scope.go:117] "RemoveContainer" containerID="bee0c79f6d5dfa80a3f7716eee333ebaabbd1f86cbf3e251968dd65a623d6623" Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.997718 4857 generic.go:334] "Generic (PLEG): container finished" podID="bd585d57-f586-4b7b-8c56-be04591b6bdd" containerID="091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762" exitCode=0 Mar 18 15:30:56 crc kubenswrapper[4857]: I0318 15:30:56.997876 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8cxcs" event={"ID":"bd585d57-f586-4b7b-8c56-be04591b6bdd","Type":"ContainerDied","Data":"091fc13d95a837dc22b07246e2350f2864208bb2ab841e9a5936d4d95b2b6762"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.026808 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-xwln7_188cb24d-b3cf-46dd-8a07-12afe6ea75e0/router/0.log" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.027114 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xwln7" event={"ID":"188cb24d-b3cf-46dd-8a07-12afe6ea75e0","Type":"ContainerStarted","Data":"48146e9c91938f14e805331abf15160c3d45864a624f74bd31586ca54606bc0b"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.067024 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-k6kp8_2e10ef1d-7c47-45d3-b16d-1ac7adccadbd/console-operator/0.log" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.067638 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" event={"ID":"2e10ef1d-7c47-45d3-b16d-1ac7adccadbd","Type":"ContainerStarted","Data":"50a1c247fd18778a04e5e6fe17186102ad112adb521f39c9ceb9ea32d1293588"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.068898 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.068967 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.068997 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.306732 4857 generic.go:334] "Generic (PLEG): container finished" podID="97a08b04-cfff-4c38-90d4-aa20b69ade73" containerID="fdb2200b298e4eeb43a92b8bc952f8b97d17c90d6e2667b29c76de9b46119703" exitCode=0 Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.314113 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" 
start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.314204 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.320392 4857 generic.go:334] "Generic (PLEG): container finished" podID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerID="a97d665e87dca706b2c7c7dfdea0091b04fee35c6af3d47ca266f428853c7d27" exitCode=0 Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.352224 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.352289 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.356801 4857 generic.go:334] "Generic (PLEG): container finished" podID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerID="23adb73f740dbf82d50af3ac9a84d6751f75602c16a4ad609ec63adf6b75f7f4" exitCode=0 Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.362788 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: 
connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.362841 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.365029 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.365108 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.366413 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerDied","Data":"fdb2200b298e4eeb43a92b8bc952f8b97d17c90d6e2667b29c76de9b46119703"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.366466 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" event={"ID":"3387b870-2054-4e0f-97b6-4af4f37bf34d","Type":"ContainerDied","Data":"a97d665e87dca706b2c7c7dfdea0091b04fee35c6af3d47ca266f428853c7d27"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.366492 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" event={"ID":"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59","Type":"ContainerDied","Data":"23adb73f740dbf82d50af3ac9a84d6751f75602c16a4ad609ec63adf6b75f7f4"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.366515 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.366533 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.377066 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" event={"ID":"3d4741b7-1f3f-405d-b675-d0141044421a","Type":"ContainerStarted","Data":"b1a90f7a8559ab0f43770766ae61df2015357114c7f850cc44f4e326c58efd1e"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.377997 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.378242 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.378312 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.405776 4857 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4cprr" event={"ID":"e4e4af7c-f5d3-4b12-b419-70dbae8cab23","Type":"ContainerStarted","Data":"f7b31cba2e9e6c63793517e9c4b32feb40ddabe08cc71611b8823f2c3f43c175"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.444561 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xtz2z" event={"ID":"30a9ec00-16b4-4349-a2c6-a2e6397e0ce0","Type":"ContainerStarted","Data":"e27612fede701f2d44a1e4bdb1b7c22b04e7ae133c76408edafc6487fbab0ebe"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.468137 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"05a3fcd78bd378c50a7c98d67ede7ed672e512f720b959c8021042bf1c9a33f0"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.468436 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.488066 4857 generic.go:334] "Generic (PLEG): container finished" podID="18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5" containerID="bf95aab027aa704d08f31e729b95d84256ac22571160b79a364ea72ca7f8906a" exitCode=1 Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.488199 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" event={"ID":"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5","Type":"ContainerDied","Data":"bf95aab027aa704d08f31e729b95d84256ac22571160b79a364ea72ca7f8906a"} Mar 18 15:30:57 crc kubenswrapper[4857]: E0318 15:30:57.488215 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c is running 
failed: container process not found" containerID="6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.489362 4857 scope.go:117] "RemoveContainer" containerID="bf95aab027aa704d08f31e729b95d84256ac22571160b79a364ea72ca7f8906a" Mar 18 15:30:57 crc kubenswrapper[4857]: E0318 15:30:57.493282 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c is running failed: container process not found" containerID="6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:57 crc kubenswrapper[4857]: E0318 15:30:57.510342 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c is running failed: container process not found" containerID="6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:57 crc kubenswrapper[4857]: E0318 15:30:57.510427 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.530877 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-85tjg_8ad51d9d-dcd1-467e-9aa6-162d19c035ed/olm-operator/0.log" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 
15:30:57.531886 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" event={"ID":"8ad51d9d-dcd1-467e-9aa6-162d19c035ed","Type":"ContainerStarted","Data":"45e1024b8313698a484fc430641083f24d650ae98b6ec755d984108acf2c1522"} Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.532634 4857 patch_prober.go:28] interesting pod/route-controller-manager-6f7f765496-hksv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.532676 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" podUID="0d61789c-ee3d-4aff-99a1-592b91b773c6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.533250 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.535900 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Mar 18 15:30:57 crc kubenswrapper[4857]: I0318 15:30:57.535951 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial 
tcp 10.217.0.24:8443: connect: connection refused" Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.462952 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.463165 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.487682 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f is running failed: container process not found" containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.492733 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f is running failed: container process not found" containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.504216 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f is running failed: container process not found" 
containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.504306 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.505048 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7fb469cf8-28cd5" Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.562671 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb is running failed: container process not found" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.590152 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb is running failed: container process not found" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.638078 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb is running failed: container process 
not found" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.638178 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.685778 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8cxcs" event={"ID":"bd585d57-f586-4b7b-8c56-be04591b6bdd","Type":"ContainerStarted","Data":"d31fdbd2af1914af0c1cf7e9f229b633cc1769fde083b46ef85f9af365e0305b"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.699693 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" event={"ID":"3387b870-2054-4e0f-97b6-4af4f37bf34d","Type":"ContainerStarted","Data":"2142f392bf25c71137c027d3b5c2a2977ac2a5ae1d49f8d648ed8b01ad95d3d8"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.718851 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.718934 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 
10.217.0.42:8443: connect: connection refused" Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.743073 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97a08b04-cfff-4c38-90d4-aa20b69ade73","Type":"ContainerStarted","Data":"532338094e75c688d694882e66ff680eec7ae8fd02ce113c02fd270cb1f41d3f"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.778354 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" event={"ID":"fdc9df02-49d3-4a40-ba9c-d6ef085abb04","Type":"ContainerStarted","Data":"75e7d5b3dbc351648c51f6944759fe1f582375812fc3fed5603bb6ce5653a088"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.779664 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.779707 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275 is running failed: container process not found" containerID="7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.781611 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275 is running failed: container process not found" containerID="7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.782217 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275 is running failed: container process not found" containerID="7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275" cmd=["grpc_health_probe","-addr=:50051"] Mar 18 15:30:58 crc kubenswrapper[4857]: E0318 15:30:58.782289 4857 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.815309 4857 generic.go:334] "Generic (PLEG): container finished" podID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerID="651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f" exitCode=0 Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.815489 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerDied","Data":"651e94ecb9952f04768b1aff9e314b75e94de588f4a7405da49272e14f564c3f"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.851614 4857 generic.go:334] "Generic (PLEG): container finished" podID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerID="7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275" exitCode=0 Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.851695 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerDied","Data":"7b1aa28a062650980441f29d94dbd781e1fe661e925597dbfe4c38e1604cf275"} Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.857446 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerID="408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb" exitCode=0
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.857594 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerDied","Data":"408a9df841d5137b471964835bf3567e6f86ae727ab0293fb22fb3dd4f0016eb"}
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.911290 4857 generic.go:334] "Generic (PLEG): container finished" podID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerID="6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c" exitCode=0
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.911410 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerDied","Data":"6881c3e04b7f05f46924eca0e6a27d8e391fe546422f51d300cf87bc3611cb6c"}
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.934238 4857 generic.go:334] "Generic (PLEG): container finished" podID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerID="fc1203b3a729cafe8010b8e3d66f285038ac11e2ffbb80649c81c48d7c75d1c6" exitCode=0
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.937593 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" event={"ID":"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c","Type":"ContainerDied","Data":"fc1203b3a729cafe8010b8e3d66f285038ac11e2ffbb80649c81c48d7c75d1c6"}
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.937640 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" event={"ID":"b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c","Type":"ContainerStarted","Data":"982e93b4c6f01bc077b9ffe642008c215f8cc3f84c48da17a37fedea2790e72a"}
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.939522 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4"
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.945142 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.945216 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.950635 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.950686 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.950806 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body=
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.950920 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused"
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.952391 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Mar 18 15:30:58 crc kubenswrapper[4857]: I0318 15:30:58.952438 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.129594 4857 trace.go:236] Trace[343647230]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Mar-2026 15:30:55.134) (total time: 3964ms):
Mar 18 15:30:59 crc kubenswrapper[4857]: Trace[343647230]: [3.964379214s] [3.964379214s] END
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.218158 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-9c6b6d984-xjvbj"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.240001 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xtz2z"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.324590 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.324689 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.406193 4857 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-xtz2z" podUID="30a9ec00-16b4-4349-a2c6-a2e6397e0ce0" containerName="frr" probeResult="failure" output="HTTP probe failed with statuscode: 404"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.521230 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-ff66c4dc9-82dsb"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.783876 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": dial tcp 10.217.0.121:8081: connect: connection refused"
Mar 18 15:30:59 crc kubenswrapper[4857]: I0318 15:30:59.825747 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-6dcbdf8bb8-jp89f"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.002190 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9sl8" event={"ID":"cb7efbe1-5cfd-4ddb-a334-fae43107aafd","Type":"ContainerStarted","Data":"ccce8a40e3802ce9ec69906dbd747ea0b89fdb6cf0c592675c6802e5c07d0893"}
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.018600 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl78l" event={"ID":"155a767b-458f-42b5-86f8-f73f4d585ee0","Type":"ContainerStarted","Data":"7f183353a2fbff7b13edda5e003e1a48ccba8a52ba2eb6c5f2d58915295fa8f2"}
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.028187 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" event={"ID":"18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5","Type":"ContainerStarted","Data":"00dc034247c7eed7b680fafb0b4a3bcd72c856f47d6ea1fa42b244f7405c0bf9"}
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.029392 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.061629 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" event={"ID":"264f3d7a-0c38-4d0a-9ff7-4f3a24164f59","Type":"ContainerStarted","Data":"1d24437ddfc94e6c95fd53c2f38ad76d6c2f9e722622c2db8b6418fab35b6368"}
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.062175 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.062323 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.066304 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.066375 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069542 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069581 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069588 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069618 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069635 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.069658 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.313333 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.313912 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.361486 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.361737 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.361572 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 18 15:31:00 crc kubenswrapper[4857]: I0318 15:31:00.362194 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.007615 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera" containerID="cri-o://6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af" gracePeriod=22
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.065457 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" containerID="cri-o://dafc1fcd5799591aa908ce0bf0bc189cc3f522c9960cc3e0575755e1b1b634e6" gracePeriod=21
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.078775 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7qbr" event={"ID":"bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900","Type":"ContainerStarted","Data":"a8bea4f7deb5cb687477a1e36be506ffa47dcdd720bd0d99522db5a1a00e225c"}
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.089438 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-89qls" event={"ID":"2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc","Type":"ContainerStarted","Data":"0b10d3f558192aeca00e0e710ea6743c599e4e063d41adf1258cf55bd17863e5"}
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.089521 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.092810 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.092869 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.094213 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body=
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.094248 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.118342 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.204157 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.204220 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.204249 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.204291 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.314547 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.314626 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 18 15:31:01 crc kubenswrapper[4857]: E0318 15:31:01.351108 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 18 15:31:01 crc kubenswrapper[4857]: E0318 15:31:01.353158 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 18 15:31:01 crc kubenswrapper[4857]: E0318 15:31:01.355015 4857 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 18 15:31:01 crc kubenswrapper[4857]: E0318 15:31:01.355079 4857 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerName="galera"
Mar 18 15:31:01 crc kubenswrapper[4857]: I0318 15:31:01.616490 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-6c9d87fc97-ddtxj"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.132229 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4" event={"ID":"6f407cca-3a72-46e6-bb51-fdb911d22ea2","Type":"ContainerDied","Data":"90659e4539a1b4c8bdd06fe3f4b0020a15e7d3164f47aad72877f138308c918b"}
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.133078 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90659e4539a1b4c8bdd06fe3f4b0020a15e7d3164f47aad72877f138308c918b"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.134713 4857 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-5mw69 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused" start-of-body=
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.134831 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" podUID="264f3d7a-0c38-4d0a-9ff7-4f3a24164f59" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.22:8081/healthz\": dial tcp 10.217.0.22:8081: connect: connection refused"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.228027 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.293192 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" podUID="fdc9df02-49d3-4a40-ba9c-d6ef085abb04" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.317989 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume\") pod \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") "
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.318092 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf9h\" (UniqueName: \"kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h\") pod \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") "
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.318244 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume\") pod \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\" (UID: \"6f407cca-3a72-46e6-bb51-fdb911d22ea2\") "
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.322374 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.322571 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.323969 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume" (OuterVolumeSpecName: "config-volume") pod "6f407cca-3a72-46e6-bb51-fdb911d22ea2" (UID: "6f407cca-3a72-46e6-bb51-fdb911d22ea2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.346932 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6f407cca-3a72-46e6-bb51-fdb911d22ea2" (UID: "6f407cca-3a72-46e6-bb51-fdb911d22ea2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.362181 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h" (OuterVolumeSpecName: "kube-api-access-qvf9h") pod "6f407cca-3a72-46e6-bb51-fdb911d22ea2" (UID: "6f407cca-3a72-46e6-bb51-fdb911d22ea2"). InnerVolumeSpecName "kube-api-access-qvf9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.366344 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gwqfj"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.367565 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.367611 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.368098 4857 patch_prober.go:28] interesting pod/console-operator-58897d9998-k6kp8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.368130 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" podUID="2e10ef1d-7c47-45d3-b16d-1ac7adccadbd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.422311 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f407cca-3a72-46e6-bb51-fdb911d22ea2-config-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.422345 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf9h\" (UniqueName: \"kubernetes.io/projected/6f407cca-3a72-46e6-bb51-fdb911d22ea2-kube-api-access-qvf9h\") on node \"crc\" DevicePath \"\""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.422356 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f407cca-3a72-46e6-bb51-fdb911d22ea2-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.505390 4857 patch_prober.go:28] interesting pod/controller-manager-c867bfcc4-nc2bq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Mar 18 15:31:02 crc kubenswrapper[4857]: I0318 15:31:02.505773 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" podUID="3d4741b7-1f3f-405d-b675-d0141044421a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.102585 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-ltg7d"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.152124 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55fbd9db57-wcht9"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.168904 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564130-rphr4"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.174013 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.351239 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xwln7"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.359161 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.359232 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.359731 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.359771 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.359803 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.361924 4857 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-m2v2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.361984 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.364700 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.364799 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" podUID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerName="openshift-config-operator" containerID="cri-o://25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4" gracePeriod=30
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.390206 4857 patch_prober.go:28] interesting pod/router-default-5444994796-xwln7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 15:31:03 crc kubenswrapper[4857]: [+]has-synced ok
Mar 18 15:31:03 crc kubenswrapper[4857]: [+]process-running ok
Mar 18 15:31:03 crc kubenswrapper[4857]: healthz check failed
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.390655 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xwln7" podUID="188cb24d-b3cf-46dd-8a07-12afe6ea75e0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.419393 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.419542 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.420573 4857 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85tjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.420676 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" podUID="8ad51d9d-dcd1-467e-9aa6-162d19c035ed" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.438835 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.438925 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.439067 4857 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-frk6c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.505663 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" podUID="3387b870-2054-4e0f-97b6-4af4f37bf34d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.668466 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg"]
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.708504 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564085-knzbg"]
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.800882 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-kddxh"
Mar 18 15:31:03 crc kubenswrapper[4857]: I0318 15:31:03.845120 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-v6rv8"
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.173907 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-8b4ps"
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.386595 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xtz2z"
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.386687 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-nqn4p"
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.406026 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xwln7"
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.430334 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7"}
Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.465682 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wd764" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.467253 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xtz2z" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.488369 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-m2v2c_3cc72860-8bb3-4d9b-af72-7f2b1a270d30/openshift-config-operator/1.log" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.497629 4857 generic.go:334] "Generic (PLEG): container finished" podID="3cc72860-8bb3-4d9b-af72-7f2b1a270d30" containerID="25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4" exitCode=2 Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.500465 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerDied","Data":"25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4"} Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.500571 4857 scope.go:117] "RemoveContainer" containerID="aeda4d16d67b2ed8a029af211815ea7cdd31defa27db89dc38e9a1ce2f91afc8" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.513340 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xwln7" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.598742 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5b79d7bc79-hmbhp" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.630332 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-fjnbb" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.723284 4857 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.727861 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.917035 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.917107 4857 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vc2t4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" start-of-body= Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.917112 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Mar 18 15:31:04 crc kubenswrapper[4857]: I0318 15:31:04.917181 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" podUID="b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.71:8080/healthz\": dial tcp 10.217.0.71:8080: connect: connection refused" Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.054106 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:31:05 
crc kubenswrapper[4857]: I0318 15:31:05.233648 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902750ed-a1ec-4bc5-a25b-de87bab4b407" path="/var/lib/kubelet/pods/902750ed-a1ec-4bc5-a25b-de87bab4b407/volumes" Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.298957 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f2dbb697-87e8-4c7f-bf29-a918e84fd78e" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.529195 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-m2v2c_3cc72860-8bb3-4d9b-af72-7f2b1a270d30/openshift-config-operator/1.log" Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.530210 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" event={"ID":"3cc72860-8bb3-4d9b-af72-7f2b1a270d30","Type":"ContainerStarted","Data":"cc1a7bf71a8d8a1f79f5033641c9bd7fe04697977df4a24e5f780a44bdf9ee2f"} Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.607930 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8cxcs" Mar 18 15:31:05 crc kubenswrapper[4857]: I0318 15:31:05.860563 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 15:31:06 crc kubenswrapper[4857]: I0318 15:31:06.082360 4857 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" containerID="cri-o://25778201c1832dfc0498778c13be064a5034c349f2156e4d6a8c893b594279e4" Mar 18 15:31:06 crc kubenswrapper[4857]: I0318 15:31:06.082390 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:31:06 crc kubenswrapper[4857]: I0318 15:31:06.359770 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:31:06 crc kubenswrapper[4857]: I0318 15:31:06.361271 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f7f765496-hksv2" Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.197518 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f2dbb697-87e8-4c7f-bf29-a918e84fd78e" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.482100 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-89qls" Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.484487 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-89qls" Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.670072 4857 generic.go:334] "Generic (PLEG): container finished" podID="f695aad9-3bb2-4529-bb2b-5c36787464c1" containerID="6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af" exitCode=0 Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.672980 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerDied","Data":"6bb63ecb774e6370523a28b1acab04b22672c3fa707cf8e6b73bd3d4f66321af"} Mar 18 15:31:07 crc kubenswrapper[4857]: I0318 15:31:07.678742 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-m2v2c" Mar 18 15:31:07 crc 
kubenswrapper[4857]: I0318 15:31:07.783115 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pm2jd" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.484992 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.487077 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.520568 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.520626 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.761361 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.762407 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.792664 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:08 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:08 crc kubenswrapper[4857]: > Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.813577 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"f695aad9-3bb2-4529-bb2b-5c36787464c1","Type":"ContainerStarted","Data":"735cd3d551df847d652a539f4921a79b4d3f8e7d1fb31417ab4bf51c09bb409e"} Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.818726 4857 generic.go:334] "Generic (PLEG): container finished" podID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerID="dafc1fcd5799591aa908ce0bf0bc189cc3f522c9960cc3e0575755e1b1b634e6" exitCode=0 Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.820575 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerDied","Data":"dafc1fcd5799591aa908ce0bf0bc189cc3f522c9960cc3e0575755e1b1b634e6"} Mar 18 15:31:08 crc kubenswrapper[4857]: I0318 15:31:08.820620 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f76ea184-35e0-4df6-8c6e-34196ccd7901","Type":"ContainerStarted","Data":"b2c34af1ed5f7d268df522e4edbb83eb8e3f26ba7da8a731252d1d206c69c91b"} Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.596592 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:09 crc kubenswrapper[4857]: > Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.607069 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:09 crc kubenswrapper[4857]: > Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.752406 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.833782 4857 generic.go:334] "Generic (PLEG): container finished" podID="1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" containerID="3056c6d80f0412acc9e13233ec8ba0e3a011b9f4bc53d7744e986f37b7a49a10" exitCode=0 Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.835403 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564130-g4dps" event={"ID":"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51","Type":"ContainerDied","Data":"3056c6d80f0412acc9e13233ec8ba0e3a011b9f4bc53d7744e986f37b7a49a10"} Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.875950 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:09 crc kubenswrapper[4857]: > Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.974451 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="f2dbb697-87e8-4c7f-bf29-a918e84fd78e" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.974956 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 18 15:31:09 crc kubenswrapper[4857]: I0318 15:31:09.976190 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"67cfe9bf10be5a5b13f18837f3f54f093ec954f8f0e86cea4c52dd855659f5b8"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 18 15:31:09 crc kubenswrapper[4857]: 
I0318 15:31:09.976261 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f2dbb697-87e8-4c7f-bf29-a918e84fd78e" containerName="cinder-scheduler" containerID="cri-o://67cfe9bf10be5a5b13f18837f3f54f093ec954f8f0e86cea4c52dd855659f5b8" gracePeriod=30 Mar 18 15:31:10 crc kubenswrapper[4857]: I0318 15:31:10.451722 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 18 15:31:10 crc kubenswrapper[4857]: I0318 15:31:10.452121 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 18 15:31:11 crc kubenswrapper[4857]: I0318 15:31:11.201625 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-5mw69" Mar 18 15:31:11 crc kubenswrapper[4857]: I0318 15:31:11.349966 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 18 15:31:11 crc kubenswrapper[4857]: I0318 15:31:11.350295 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 18 15:31:11 crc kubenswrapper[4857]: I0318 15:31:11.890072 4857 generic.go:334] "Generic (PLEG): container finished" podID="18946755-ed18-4d4a-bd99-7bb08f42c91b" containerID="8f4bb54650f0b81b23061d4d2f9b15448fd75788662c0ca363aaef53c2d76b4e" exitCode=1 Mar 18 15:31:11 crc kubenswrapper[4857]: I0318 15:31:11.890784 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18946755-ed18-4d4a-bd99-7bb08f42c91b","Type":"ContainerDied","Data":"8f4bb54650f0b81b23061d4d2f9b15448fd75788662c0ca363aaef53c2d76b4e"} Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.157644 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.235999 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8r7b\" (UniqueName: \"kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b\") pod \"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51\" (UID: \"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51\") " Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.247290 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b" (OuterVolumeSpecName: "kube-api-access-f8r7b") pod "1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" (UID: "1b2c4b59-9fc5-4ec9-9189-60cb1e716f51"). InnerVolumeSpecName "kube-api-access-f8r7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.321322 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5847fcc4fb-mg28t" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.344089 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8r7b\" (UniqueName: \"kubernetes.io/projected/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51-kube-api-access-f8r7b\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.379139 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-k6kp8" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.515963 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c867bfcc4-nc2bq" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.905331 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564130-g4dps" 
event={"ID":"1b2c4b59-9fc5-4ec9-9189-60cb1e716f51","Type":"ContainerDied","Data":"c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423"} Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.905356 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564130-g4dps" Mar 18 15:31:12 crc kubenswrapper[4857]: I0318 15:31:12.905385 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1134a608abd5fd8287f4325f34eab69532d26a8a123e5f1048aeceeb33bc423" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.329017 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564124-dvwdx"] Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.341345 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564124-dvwdx"] Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.431053 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85tjg" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.435403 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 15:31:13 crc kubenswrapper[4857]: E0318 15:31:13.438678 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" containerName="oc" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.438716 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" containerName="oc" Mar 18 15:31:13 crc kubenswrapper[4857]: E0318 15:31:13.438837 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f407cca-3a72-46e6-bb51-fdb911d22ea2" containerName="collect-profiles" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.438851 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f407cca-3a72-46e6-bb51-fdb911d22ea2" containerName="collect-profiles" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.439354 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f407cca-3a72-46e6-bb51-fdb911d22ea2" containerName="collect-profiles" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.439388 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" containerName="oc" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.447322 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.494345 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.504133 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-frk6c" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.593883 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.594035 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrdx\" (UniqueName: \"kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.601471 4857 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.665089 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.726094 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" containerID="cri-o://2b8446d8d8d3e8191e29a2bcf3fca537abec08ed645b1d0fafab48986027acaf" gracePeriod=14 Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.729476 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrdx\" (UniqueName: \"kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.730266 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.730479 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " 
pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.734868 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.735609 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.795279 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrdx\" (UniqueName: \"kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx\") pod \"redhat-operators-r2m9n\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.885342 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.981615 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18946755-ed18-4d4a-bd99-7bb08f42c91b","Type":"ContainerDied","Data":"66fefa4133aee0cc7836f6878165dc4aab4883c4bbd5969c4f16279dcde07922"} Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.981957 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66fefa4133aee0cc7836f6878165dc4aab4883c4bbd5969c4f16279dcde07922" Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.993918 4857 generic.go:334] "Generic 
(PLEG): container finished" podID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerID="2b8446d8d8d3e8191e29a2bcf3fca537abec08ed645b1d0fafab48986027acaf" exitCode=0 Mar 18 15:31:13 crc kubenswrapper[4857]: I0318 15:31:13.995826 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" event={"ID":"8c2aa0cb-1b55-4425-ac30-0369de76a057","Type":"ContainerDied","Data":"2b8446d8d8d3e8191e29a2bcf3fca537abec08ed645b1d0fafab48986027acaf"} Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.096034 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.098538 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.271966 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272101 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272200 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272295 4857 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272408 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272477 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272614 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272654 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.272691 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbxzt\" (UniqueName: 
\"kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt\") pod \"18946755-ed18-4d4a-bd99-7bb08f42c91b\" (UID: \"18946755-ed18-4d4a-bd99-7bb08f42c91b\") " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.275497 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.287019 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data" (OuterVolumeSpecName: "config-data") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.293027 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.305063 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.313695 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.326445 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt" (OuterVolumeSpecName: "kube-api-access-tbxzt") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "kube-api-access-tbxzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.351352 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.373653 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.376728 4857 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.376779 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.376790 4857 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.376799 4857 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18946755-ed18-4d4a-bd99-7bb08f42c91b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.384008 4857 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.384068 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbxzt\" (UniqueName: \"kubernetes.io/projected/18946755-ed18-4d4a-bd99-7bb08f42c91b-kube-api-access-tbxzt\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.384089 4857 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-config-data\") on node \"crc\" DevicePath \"\"" Mar 18 
15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.388594 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.397910 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "18946755-ed18-4d4a-bd99-7bb08f42c91b" (UID: "18946755-ed18-4d4a-bd99-7bb08f42c91b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.435460 4857 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.488011 4857 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18946755-ed18-4d4a-bd99-7bb08f42c91b-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.488050 4857 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.488062 4857 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18946755-ed18-4d4a-bd99-7bb08f42c91b-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.717126 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-galera-0" Mar 18 15:31:14 crc kubenswrapper[4857]: I0318 15:31:14.891361 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vc2t4" Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.011576 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.015064 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" event={"ID":"8c2aa0cb-1b55-4425-ac30-0369de76a057","Type":"ContainerStarted","Data":"94215bdd547b3cfd00780245787e228a8c46a0250affe7a0891d635847a10c59"} Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.016663 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.016731 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.016885 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.043881 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.227233 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ebe07019-bd4e-4887-970c-a02fdc932f25" path="/var/lib/kubelet/pods/ebe07019-bd4e-4887-970c-a02fdc932f25/volumes" Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.837525 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Mar 18 15:31:15 crc kubenswrapper[4857]: I0318 15:31:15.837587 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.024974 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerStarted","Data":"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01"} Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.025055 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerStarted","Data":"9faf05e0dda8b35254fee58b0ecbcf7a498c4d49a2b84641d6cb354f94b62aa8"} Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.025967 4857 patch_prober.go:28] interesting pod/oauth-openshift-f79475d48-ncfgv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.026096 4857 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" podUID="8c2aa0cb-1b55-4425-ac30-0369de76a057" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.568612 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-bl8th container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.568942 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-fc6d448bf-bl8th" podUID="9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 15:31:16 crc kubenswrapper[4857]: I0318 15:31:16.573033 4857 patch_prober.go:28] interesting pod/logging-loki-gateway-fc6d448bf-w5jpj container/opa namespace/openshift-logging: Readiness probe status=failure output="" start-of-body= Mar 18 15:31:17 crc kubenswrapper[4857]: I0318 15:31:17.042052 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerID="3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01" exitCode=0 Mar 18 15:31:17 crc kubenswrapper[4857]: I0318 15:31:17.042113 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerDied","Data":"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01"} Mar 18 15:31:17 crc kubenswrapper[4857]: I0318 15:31:17.060019 4857 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.554533 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:18 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:18 crc kubenswrapper[4857]: > Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.682066 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 18 15:31:18 crc kubenswrapper[4857]: E0318 15:31:18.682993 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18946755-ed18-4d4a-bd99-7bb08f42c91b" containerName="tempest-tests-tempest-tests-runner" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.683010 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="18946755-ed18-4d4a-bd99-7bb08f42c91b" containerName="tempest-tests-tempest-tests-runner" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.683277 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="18946755-ed18-4d4a-bd99-7bb08f42c91b" containerName="tempest-tests-tempest-tests-runner" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.684967 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.688986 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xwmbd" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.696985 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.716148 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.716274 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzx8s\" (UniqueName: \"kubernetes.io/projected/bad50738-6a0f-49a2-abd9-4ebd71bc9056-kube-api-access-mzx8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.818893 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzx8s\" (UniqueName: \"kubernetes.io/projected/bad50738-6a0f-49a2-abd9-4ebd71bc9056-kube-api-access-mzx8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.819438 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.821001 4857 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.842534 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzx8s\" (UniqueName: \"kubernetes.io/projected/bad50738-6a0f-49a2-abd9-4ebd71bc9056-kube-api-access-mzx8s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:18 crc kubenswrapper[4857]: I0318 15:31:18.865654 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bad50738-6a0f-49a2-abd9-4ebd71bc9056\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.038850 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.076685 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerStarted","Data":"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da"} Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.550517 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:19 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:19 crc kubenswrapper[4857]: > Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.601403 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:19 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:19 crc kubenswrapper[4857]: > Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.820880 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:19 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:19 crc kubenswrapper[4857]: > Mar 18 15:31:19 crc kubenswrapper[4857]: I0318 15:31:19.893781 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 18 15:31:20 crc kubenswrapper[4857]: I0318 15:31:20.097141 4857 generic.go:334] "Generic (PLEG): container finished" 
podID="f2dbb697-87e8-4c7f-bf29-a918e84fd78e" containerID="67cfe9bf10be5a5b13f18837f3f54f093ec954f8f0e86cea4c52dd855659f5b8" exitCode=0 Mar 18 15:31:20 crc kubenswrapper[4857]: I0318 15:31:20.097239 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2dbb697-87e8-4c7f-bf29-a918e84fd78e","Type":"ContainerDied","Data":"67cfe9bf10be5a5b13f18837f3f54f093ec954f8f0e86cea4c52dd855659f5b8"} Mar 18 15:31:20 crc kubenswrapper[4857]: I0318 15:31:20.098845 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bad50738-6a0f-49a2-abd9-4ebd71bc9056","Type":"ContainerStarted","Data":"20b0a4920ed31b019d0cd9a6b840e1ca61a3f97bb32cf4b3af1c572707b4c45f"} Mar 18 15:31:21 crc kubenswrapper[4857]: I0318 15:31:21.773660 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:31:21 crc kubenswrapper[4857]: I0318 15:31:21.774320 4857 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="f76ea184-35e0-4df6-8c6e-34196ccd7901" containerName="galera" probeResult="failure" output="command timed out" Mar 18 15:31:23 crc kubenswrapper[4857]: I0318 15:31:23.154729 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2dbb697-87e8-4c7f-bf29-a918e84fd78e","Type":"ContainerStarted","Data":"4ff1c2afe2ffd7db456b1d69b4906567e70053fff4ee782dc2b77a38de3b2f99"} Mar 18 15:31:23 crc kubenswrapper[4857]: I0318 15:31:23.157811 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bad50738-6a0f-49a2-abd9-4ebd71bc9056","Type":"ContainerStarted","Data":"3d37b2cfa5e2b68d90ef85d88bca8037f94a11c5c6687fe3e262d527e54fdc8e"} Mar 18 15:31:23 crc 
kubenswrapper[4857]: I0318 15:31:23.248338 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.668087761 podStartE2EDuration="5.247604559s" podCreationTimestamp="2026-03-18 15:31:18 +0000 UTC" firstStartedPulling="2026-03-18 15:31:19.907478011 +0000 UTC m=+5464.036606468" lastFinishedPulling="2026-03-18 15:31:22.486994809 +0000 UTC m=+5466.616123266" observedRunningTime="2026-03-18 15:31:23.206796531 +0000 UTC m=+5467.335924998" watchObservedRunningTime="2026-03-18 15:31:23.247604559 +0000 UTC m=+5467.376733016" Mar 18 15:31:23 crc kubenswrapper[4857]: I0318 15:31:23.675691 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gh9dk" Mar 18 15:31:25 crc kubenswrapper[4857]: I0318 15:31:25.843467 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f79475d48-ncfgv" Mar 18 15:31:26 crc kubenswrapper[4857]: I0318 15:31:26.901233 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 18 15:31:28 crc kubenswrapper[4857]: I0318 15:31:28.642675 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-89qls" podUID="2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:28 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:28 crc kubenswrapper[4857]: > Mar 18 15:31:29 crc kubenswrapper[4857]: I0318 15:31:29.236184 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerID="c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da" exitCode=0 Mar 18 15:31:29 crc kubenswrapper[4857]: I0318 15:31:29.236239 4857 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerDied","Data":"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da"} Mar 18 15:31:29 crc kubenswrapper[4857]: I0318 15:31:29.558298 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f9sl8" podUID="cb7efbe1-5cfd-4ddb-a334-fae43107aafd" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:29 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:29 crc kubenswrapper[4857]: > Mar 18 15:31:29 crc kubenswrapper[4857]: I0318 15:31:29.573763 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:29 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:29 crc kubenswrapper[4857]: > Mar 18 15:31:29 crc kubenswrapper[4857]: I0318 15:31:29.823876 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zl78l" podUID="155a767b-458f-42b5-86f8-f73f4d585ee0" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:29 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:29 crc kubenswrapper[4857]: > Mar 18 15:31:30 crc kubenswrapper[4857]: I0318 15:31:30.250140 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerStarted","Data":"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8"} Mar 18 15:31:30 crc kubenswrapper[4857]: I0318 15:31:30.270624 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r2m9n" 
podStartSLOduration=4.620109308 podStartE2EDuration="17.270602755s" podCreationTimestamp="2026-03-18 15:31:13 +0000 UTC" firstStartedPulling="2026-03-18 15:31:17.046222663 +0000 UTC m=+5461.175351120" lastFinishedPulling="2026-03-18 15:31:29.69671611 +0000 UTC m=+5473.825844567" observedRunningTime="2026-03-18 15:31:30.266483051 +0000 UTC m=+5474.395611508" watchObservedRunningTime="2026-03-18 15:31:30.270602755 +0000 UTC m=+5474.399731212" Mar 18 15:31:31 crc kubenswrapper[4857]: I0318 15:31:31.938323 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 18 15:31:32 crc kubenswrapper[4857]: I0318 15:31:32.740396 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7889654c4-2jp9b" Mar 18 15:31:34 crc kubenswrapper[4857]: I0318 15:31:34.097601 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:34 crc kubenswrapper[4857]: I0318 15:31:34.098246 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:31:35 crc kubenswrapper[4857]: I0318 15:31:35.925894 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:35 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:35 crc kubenswrapper[4857]: > Mar 18 15:31:37 crc kubenswrapper[4857]: I0318 15:31:37.560426 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-89qls" Mar 18 15:31:37 crc kubenswrapper[4857]: I0318 15:31:37.624324 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-89qls" 
Mar 18 15:31:38 crc kubenswrapper[4857]: I0318 15:31:38.615034 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:31:38 crc kubenswrapper[4857]: I0318 15:31:38.708274 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9sl8" Mar 18 15:31:38 crc kubenswrapper[4857]: I0318 15:31:38.820098 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:31:38 crc kubenswrapper[4857]: I0318 15:31:38.886264 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl78l" Mar 18 15:31:39 crc kubenswrapper[4857]: I0318 15:31:39.679121 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:39 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:39 crc kubenswrapper[4857]: > Mar 18 15:31:44 crc kubenswrapper[4857]: I0318 15:31:44.661118 4857 scope.go:117] "RemoveContainer" containerID="093a0923789414a87bb74a2fe4c822274b7ff8491ef259d8e02d451a308a65c5" Mar 18 15:31:44 crc kubenswrapper[4857]: I0318 15:31:44.771992 4857 scope.go:117] "RemoveContainer" containerID="29bbbae86d50334373071fe5c9c5865d9d59a37fa75d8574e2e48bb1feac8399" Mar 18 15:31:45 crc kubenswrapper[4857]: I0318 15:31:45.150250 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:45 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:45 crc kubenswrapper[4857]: > Mar 18 15:31:49 crc kubenswrapper[4857]: I0318 
15:31:49.511088 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 18 15:31:49 crc kubenswrapper[4857]: I0318 15:31:49.595998 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:49 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:49 crc kubenswrapper[4857]: > Mar 18 15:31:55 crc kubenswrapper[4857]: I0318 15:31:55.180036 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:55 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:55 crc kubenswrapper[4857]: > Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.224840 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2s484/must-gather-h8gst"] Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.227439 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.233192 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.233512 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9fw\" (UniqueName: \"kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.241836 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2s484"/"openshift-service-ca.crt" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.241887 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2s484"/"kube-root-ca.crt" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.247917 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2s484/must-gather-h8gst"] Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.339655 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.339817 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5z9fw\" (UniqueName: \"kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.340995 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.397802 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z9fw\" (UniqueName: \"kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw\") pod \"must-gather-h8gst\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.573736 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.619039 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.623254 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.646577 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.669723 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96vk\" (UniqueName: \"kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.669830 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.670304 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.775566 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.775997 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p96vk\" (UniqueName: \"kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.776029 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.776598 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.776873 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:58 crc kubenswrapper[4857]: I0318 15:31:58.802119 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96vk\" (UniqueName: \"kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk\") pod \"certified-operators-ccrhf\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:59 crc kubenswrapper[4857]: I0318 15:31:59.024320 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:31:59 crc kubenswrapper[4857]: I0318 15:31:59.633997 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:31:59 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:31:59 crc kubenswrapper[4857]: > Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.066801 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2s484/must-gather-h8gst"] Mar 18 15:32:00 crc kubenswrapper[4857]: W0318 15:32:00.075884 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda85e0a42_0de7_4c3e_959f_3b16528da79c.slice/crio-15ee4df0659bf395b8e2838bb6daf46338468f94627b7bbeb4b1cac0419ae524 WatchSource:0}: Error finding container 15ee4df0659bf395b8e2838bb6daf46338468f94627b7bbeb4b1cac0419ae524: Status 404 returned error can't find the container with id 15ee4df0659bf395b8e2838bb6daf46338468f94627b7bbeb4b1cac0419ae524 Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.087599 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.207196 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564132-4fgzl"] Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.209083 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.211993 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.212039 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.221224 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.224866 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hv25\" (UniqueName: \"kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25\") pod \"auto-csr-approver-29564132-4fgzl\" (UID: \"2a7f4d26-eaa4-4d54-8a3d-b912b9484318\") " pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.230571 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564132-4fgzl"] Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.326692 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hv25\" (UniqueName: \"kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25\") pod \"auto-csr-approver-29564132-4fgzl\" (UID: \"2a7f4d26-eaa4-4d54-8a3d-b912b9484318\") " pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.379634 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hv25\" (UniqueName: \"kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25\") pod \"auto-csr-approver-29564132-4fgzl\" (UID: \"2a7f4d26-eaa4-4d54-8a3d-b912b9484318\") " 
pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.612178 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.756675 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/must-gather-h8gst" event={"ID":"a85e0a42-0de7-4c3e-959f-3b16528da79c","Type":"ContainerStarted","Data":"15ee4df0659bf395b8e2838bb6daf46338468f94627b7bbeb4b1cac0419ae524"} Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.763932 4857 generic.go:334] "Generic (PLEG): container finished" podID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerID="fb87bed1dbf65310bfec3667bb8ba5f6f1603f4041855f77bc06115f9bcb09ef" exitCode=0 Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.763978 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerDied","Data":"fb87bed1dbf65310bfec3667bb8ba5f6f1603f4041855f77bc06115f9bcb09ef"} Mar 18 15:32:00 crc kubenswrapper[4857]: I0318 15:32:00.764003 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerStarted","Data":"984cc5733d3843974781d97d58a00bbf56cb77f947b7a65fe85c96256429231c"} Mar 18 15:32:01 crc kubenswrapper[4857]: I0318 15:32:01.183053 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564132-4fgzl"] Mar 18 15:32:01 crc kubenswrapper[4857]: W0318 15:32:01.187465 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a7f4d26_eaa4_4d54_8a3d_b912b9484318.slice/crio-c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71 WatchSource:0}: Error finding container 
c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71: Status 404 returned error can't find the container with id c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71 Mar 18 15:32:01 crc kubenswrapper[4857]: I0318 15:32:01.809199 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" event={"ID":"2a7f4d26-eaa4-4d54-8a3d-b912b9484318","Type":"ContainerStarted","Data":"c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71"} Mar 18 15:32:02 crc kubenswrapper[4857]: I0318 15:32:02.828930 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerStarted","Data":"3a3665513705a124ea0336fb353c3e5fdf01aa3c2e353330de8f5d06002ce455"} Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.582231 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.586382 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.594165 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.678508 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.678728 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rp7\" (UniqueName: \"kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.678956 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.782260 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.782444 4857 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-92rp7\" (UniqueName: \"kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.782496 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.782944 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.783222 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.811632 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rp7\" (UniqueName: \"kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7\") pod \"redhat-marketplace-swdws\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.858628 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29564132-4fgzl" event={"ID":"2a7f4d26-eaa4-4d54-8a3d-b912b9484318","Type":"ContainerStarted","Data":"37bca6d3856c622b84ebe7e2ca4defaeac3b5df10687e315988baf50b8427dac"} Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.888211 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" podStartSLOduration=3.75018386 podStartE2EDuration="4.88803494s" podCreationTimestamp="2026-03-18 15:32:00 +0000 UTC" firstStartedPulling="2026-03-18 15:32:01.192236388 +0000 UTC m=+5505.321364845" lastFinishedPulling="2026-03-18 15:32:02.330087468 +0000 UTC m=+5506.459215925" observedRunningTime="2026-03-18 15:32:04.875238688 +0000 UTC m=+5509.004367145" watchObservedRunningTime="2026-03-18 15:32:04.88803494 +0000 UTC m=+5509.017163397" Mar 18 15:32:04 crc kubenswrapper[4857]: I0318 15:32:04.937628 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:05 crc kubenswrapper[4857]: I0318 15:32:05.167463 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:05 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:05 crc kubenswrapper[4857]: > Mar 18 15:32:05 crc kubenswrapper[4857]: I0318 15:32:05.888434 4857 generic.go:334] "Generic (PLEG): container finished" podID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerID="3a3665513705a124ea0336fb353c3e5fdf01aa3c2e353330de8f5d06002ce455" exitCode=0 Mar 18 15:32:05 crc kubenswrapper[4857]: I0318 15:32:05.888545 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" 
event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerDied","Data":"3a3665513705a124ea0336fb353c3e5fdf01aa3c2e353330de8f5d06002ce455"} Mar 18 15:32:07 crc kubenswrapper[4857]: I0318 15:32:07.929674 4857 generic.go:334] "Generic (PLEG): container finished" podID="2a7f4d26-eaa4-4d54-8a3d-b912b9484318" containerID="37bca6d3856c622b84ebe7e2ca4defaeac3b5df10687e315988baf50b8427dac" exitCode=0 Mar 18 15:32:07 crc kubenswrapper[4857]: I0318 15:32:07.930278 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" event={"ID":"2a7f4d26-eaa4-4d54-8a3d-b912b9484318","Type":"ContainerDied","Data":"37bca6d3856c622b84ebe7e2ca4defaeac3b5df10687e315988baf50b8427dac"} Mar 18 15:32:09 crc kubenswrapper[4857]: I0318 15:32:09.586292 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:09 crc kubenswrapper[4857]: > Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.178012 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" event={"ID":"2a7f4d26-eaa4-4d54-8a3d-b912b9484318","Type":"ContainerDied","Data":"c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71"} Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.181441 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c69042f68723b971b5e001a2816cb014b41d1178eb073f2d08b958a6b4688d71" Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.230317 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.258538 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hv25\" (UniqueName: \"kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25\") pod \"2a7f4d26-eaa4-4d54-8a3d-b912b9484318\" (UID: \"2a7f4d26-eaa4-4d54-8a3d-b912b9484318\") " Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.301102 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25" (OuterVolumeSpecName: "kube-api-access-7hv25") pod "2a7f4d26-eaa4-4d54-8a3d-b912b9484318" (UID: "2a7f4d26-eaa4-4d54-8a3d-b912b9484318"). InnerVolumeSpecName "kube-api-access-7hv25". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.365258 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hv25\" (UniqueName: \"kubernetes.io/projected/2a7f4d26-eaa4-4d54-8a3d-b912b9484318-kube-api-access-7hv25\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:12 crc kubenswrapper[4857]: I0318 15:32:12.826388 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.403714 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerStarted","Data":"bb70eda0bc045e921ca94bb7571277b744c01993601dc2b51e7276c0b9496fb6"} Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.415901 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" 
event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerStarted","Data":"02a4a49abe2881ac1fe5260942c00d6ef0fabe17b50812e62ac12ebd58067f18"} Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.430978 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564132-4fgzl" Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.431261 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/must-gather-h8gst" event={"ID":"a85e0a42-0de7-4c3e-959f-3b16528da79c","Type":"ContainerStarted","Data":"5461296be121f426841e9d8e246dc400addb8ff017f52665513b09a9b3199d4d"} Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.473365 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ccrhf" podStartSLOduration=3.927448888 podStartE2EDuration="15.473341223s" podCreationTimestamp="2026-03-18 15:31:58 +0000 UTC" firstStartedPulling="2026-03-18 15:32:00.766007594 +0000 UTC m=+5504.895136051" lastFinishedPulling="2026-03-18 15:32:12.311899929 +0000 UTC m=+5516.441028386" observedRunningTime="2026-03-18 15:32:13.456233542 +0000 UTC m=+5517.585361999" watchObservedRunningTime="2026-03-18 15:32:13.473341223 +0000 UTC m=+5517.602469680" Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.566257 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564126-hh6vc"] Mar 18 15:32:13 crc kubenswrapper[4857]: I0318 15:32:13.587701 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564126-hh6vc"] Mar 18 15:32:14 crc kubenswrapper[4857]: I0318 15:32:14.448033 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/must-gather-h8gst" event={"ID":"a85e0a42-0de7-4c3e-959f-3b16528da79c","Type":"ContainerStarted","Data":"b8cce18dc3defd2b352b99e57e508af0e55fea62ae140f3a0aa913672fe5193e"} Mar 18 15:32:14 crc 
kubenswrapper[4857]: I0318 15:32:14.450768 4857 generic.go:334] "Generic (PLEG): container finished" podID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerID="f84aef82b7c8cc73a434eb34b20d6e26b79e06122f47937415d32e78b38e6b8e" exitCode=0 Mar 18 15:32:14 crc kubenswrapper[4857]: I0318 15:32:14.450843 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerDied","Data":"f84aef82b7c8cc73a434eb34b20d6e26b79e06122f47937415d32e78b38e6b8e"} Mar 18 15:32:14 crc kubenswrapper[4857]: I0318 15:32:14.482336 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2s484/must-gather-h8gst" podStartSLOduration=4.259647691 podStartE2EDuration="16.482307254s" podCreationTimestamp="2026-03-18 15:31:58 +0000 UTC" firstStartedPulling="2026-03-18 15:32:00.080024714 +0000 UTC m=+5504.209153171" lastFinishedPulling="2026-03-18 15:32:12.302684277 +0000 UTC m=+5516.431812734" observedRunningTime="2026-03-18 15:32:14.477836692 +0000 UTC m=+5518.606965149" watchObservedRunningTime="2026-03-18 15:32:14.482307254 +0000 UTC m=+5518.611435711" Mar 18 15:32:15 crc kubenswrapper[4857]: I0318 15:32:15.158890 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:15 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:15 crc kubenswrapper[4857]: > Mar 18 15:32:15 crc kubenswrapper[4857]: I0318 15:32:15.180595 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d06d68-3eb9-4d85-84fd-cd190b48cb48" path="/var/lib/kubelet/pods/e8d06d68-3eb9-4d85-84fd-cd190b48cb48/volumes" Mar 18 15:32:16 crc kubenswrapper[4857]: I0318 15:32:16.483948 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerStarted","Data":"c1d23a226f0cc065668fce32e8c6e891d65eb208000a996666c02eab3190e17b"} Mar 18 15:32:18 crc kubenswrapper[4857]: I0318 15:32:18.527308 4857 generic.go:334] "Generic (PLEG): container finished" podID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerID="c1d23a226f0cc065668fce32e8c6e891d65eb208000a996666c02eab3190e17b" exitCode=0 Mar 18 15:32:18 crc kubenswrapper[4857]: I0318 15:32:18.528081 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerDied","Data":"c1d23a226f0cc065668fce32e8c6e891d65eb208000a996666c02eab3190e17b"} Mar 18 15:32:19 crc kubenswrapper[4857]: I0318 15:32:19.025287 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:19 crc kubenswrapper[4857]: I0318 15:32:19.025625 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:19 crc kubenswrapper[4857]: I0318 15:32:19.600680 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:19 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:19 crc kubenswrapper[4857]: > Mar 18 15:32:20 crc kubenswrapper[4857]: I0318 15:32:20.086450 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ccrhf" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:20 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:20 crc kubenswrapper[4857]: > Mar 
18 15:32:20 crc kubenswrapper[4857]: I0318 15:32:20.560878 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerStarted","Data":"97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f"} Mar 18 15:32:20 crc kubenswrapper[4857]: I0318 15:32:20.592411 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-swdws" podStartSLOduration=11.956816789 podStartE2EDuration="16.592388489s" podCreationTimestamp="2026-03-18 15:32:04 +0000 UTC" firstStartedPulling="2026-03-18 15:32:14.454270178 +0000 UTC m=+5518.583398655" lastFinishedPulling="2026-03-18 15:32:19.089841898 +0000 UTC m=+5523.218970355" observedRunningTime="2026-03-18 15:32:20.584183602 +0000 UTC m=+5524.713312049" watchObservedRunningTime="2026-03-18 15:32:20.592388489 +0000 UTC m=+5524.721516946" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.188385 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2s484/crc-debug-5kgxv"] Mar 18 15:32:22 crc kubenswrapper[4857]: E0318 15:32:22.190000 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7f4d26-eaa4-4d54-8a3d-b912b9484318" containerName="oc" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.190028 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7f4d26-eaa4-4d54-8a3d-b912b9484318" containerName="oc" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.190355 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7f4d26-eaa4-4d54-8a3d-b912b9484318" containerName="oc" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.191613 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.195486 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2s484"/"default-dockercfg-9dtng" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.390125 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.390276 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7mrk\" (UniqueName: \"kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.513886 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.514649 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7mrk\" (UniqueName: \"kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.519826 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.557493 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7mrk\" (UniqueName: \"kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk\") pod \"crc-debug-5kgxv\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:22 crc kubenswrapper[4857]: I0318 15:32:22.811887 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:32:23 crc kubenswrapper[4857]: I0318 15:32:23.616112 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-5kgxv" event={"ID":"37883785-3057-4faf-9dac-97d6b547801b","Type":"ContainerStarted","Data":"ca27e93aafb49459b4afa012a73304a2f6a0c4c83bbdfe13bd4e6acbbae8beac"} Mar 18 15:32:25 crc kubenswrapper[4857]: I0318 15:32:25.253617 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:25 crc kubenswrapper[4857]: I0318 15:32:25.255430 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:25 crc kubenswrapper[4857]: I0318 15:32:25.265566 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:25 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:25 crc kubenswrapper[4857]: > Mar 18 15:32:27 crc kubenswrapper[4857]: I0318 15:32:27.061237 4857 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-marketplace-swdws" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:27 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:27 crc kubenswrapper[4857]: > Mar 18 15:32:29 crc kubenswrapper[4857]: I0318 15:32:29.931870 4857 trace.go:236] Trace[844965927]: "Calculate volume metrics of storage for pod minio-dev/minio" (18-Mar-2026 15:32:28.722) (total time: 1204ms): Mar 18 15:32:29 crc kubenswrapper[4857]: Trace[844965927]: [1.204697355s] [1.204697355s] END Mar 18 15:32:30 crc kubenswrapper[4857]: I0318 15:32:30.866095 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7qbr" podUID="bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:30 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:30 crc kubenswrapper[4857]: > Mar 18 15:32:31 crc kubenswrapper[4857]: I0318 15:32:31.341401 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ccrhf" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:31 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:31 crc kubenswrapper[4857]: > Mar 18 15:32:35 crc kubenswrapper[4857]: I0318 15:32:35.438714 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:35 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:35 crc kubenswrapper[4857]: > Mar 18 15:32:36 crc kubenswrapper[4857]: I0318 15:32:36.260541 4857 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-swdws" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:36 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:36 crc kubenswrapper[4857]: > Mar 18 15:32:38 crc kubenswrapper[4857]: I0318 15:32:38.588304 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:32:38 crc kubenswrapper[4857]: I0318 15:32:38.676712 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7qbr" Mar 18 15:32:39 crc kubenswrapper[4857]: I0318 15:32:39.086240 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:39 crc kubenswrapper[4857]: I0318 15:32:39.156984 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:39 crc kubenswrapper[4857]: I0318 15:32:39.951350 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:32:40 crc kubenswrapper[4857]: I0318 15:32:40.927204 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ccrhf" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" containerID="cri-o://02a4a49abe2881ac1fe5260942c00d6ef0fabe17b50812e62ac12ebd58067f18" gracePeriod=2 Mar 18 15:32:41 crc kubenswrapper[4857]: I0318 15:32:41.951153 4857 generic.go:334] "Generic (PLEG): container finished" podID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerID="02a4a49abe2881ac1fe5260942c00d6ef0fabe17b50812e62ac12ebd58067f18" exitCode=0 Mar 18 15:32:41 crc kubenswrapper[4857]: I0318 15:32:41.951247 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ccrhf" event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerDied","Data":"02a4a49abe2881ac1fe5260942c00d6ef0fabe17b50812e62ac12ebd58067f18"} Mar 18 15:32:44 crc kubenswrapper[4857]: I0318 15:32:44.962963 4857 scope.go:117] "RemoveContainer" containerID="00314f1bf09b972d525fa6c771c02605fcd82cd197d20f88cc628e109575f9f0" Mar 18 15:32:45 crc kubenswrapper[4857]: I0318 15:32:45.121103 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:45 crc kubenswrapper[4857]: I0318 15:32:45.183220 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:45 crc kubenswrapper[4857]: I0318 15:32:45.241835 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:45 crc kubenswrapper[4857]: E0318 15:32:45.262392 4857 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Mar 18 15:32:45 crc kubenswrapper[4857]: E0318 15:32:45.264044 4857 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins 
block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7mrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-5kgxv_openshift-must-gather-2s484(37883785-3057-4faf-9dac-97d6b547801b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 15:32:45 crc kubenswrapper[4857]: E0318 15:32:45.265706 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-2s484/crc-debug-5kgxv" podUID="37883785-3057-4faf-9dac-97d6b547801b" Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.145639 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:46 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:46 crc kubenswrapper[4857]: > Mar 18 15:32:46 crc kubenswrapper[4857]: E0318 15:32:46.152889 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-2s484/crc-debug-5kgxv" podUID="37883785-3057-4faf-9dac-97d6b547801b" Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.880525 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.977730 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content\") pod \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.977810 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p96vk\" (UniqueName: \"kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk\") pod \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.978197 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities\") pod \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\" (UID: \"6539aa64-4253-4ec8-aa9a-0033341b3c9d\") " Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.979465 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities" (OuterVolumeSpecName: "utilities") pod "6539aa64-4253-4ec8-aa9a-0033341b3c9d" (UID: "6539aa64-4253-4ec8-aa9a-0033341b3c9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:32:46 crc kubenswrapper[4857]: I0318 15:32:46.980246 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.050768 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6539aa64-4253-4ec8-aa9a-0033341b3c9d" (UID: "6539aa64-4253-4ec8-aa9a-0033341b3c9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.083501 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6539aa64-4253-4ec8-aa9a-0033341b3c9d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.174121 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ccrhf" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.174299 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-swdws" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" containerID="cri-o://97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f" gracePeriod=2 Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.185583 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ccrhf" event={"ID":"6539aa64-4253-4ec8-aa9a-0033341b3c9d","Type":"ContainerDied","Data":"984cc5733d3843974781d97d58a00bbf56cb77f947b7a65fe85c96256429231c"} Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.185652 4857 scope.go:117] "RemoveContainer" containerID="02a4a49abe2881ac1fe5260942c00d6ef0fabe17b50812e62ac12ebd58067f18" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.217069 4857 scope.go:117] "RemoveContainer" containerID="3a3665513705a124ea0336fb353c3e5fdf01aa3c2e353330de8f5d06002ce455" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.673494 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk" (OuterVolumeSpecName: "kube-api-access-p96vk") pod "6539aa64-4253-4ec8-aa9a-0033341b3c9d" (UID: "6539aa64-4253-4ec8-aa9a-0033341b3c9d"). InnerVolumeSpecName "kube-api-access-p96vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:32:47 crc kubenswrapper[4857]: I0318 15:32:47.701423 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p96vk\" (UniqueName: \"kubernetes.io/projected/6539aa64-4253-4ec8-aa9a-0033341b3c9d-kube-api-access-p96vk\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.196424 4857 scope.go:117] "RemoveContainer" containerID="fb87bed1dbf65310bfec3667bb8ba5f6f1603f4041855f77bc06115f9bcb09ef" Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.305608 4857 generic.go:334] "Generic (PLEG): container finished" podID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerID="97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f" exitCode=0 Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.305916 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerDied","Data":"97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f"} Mar 18 15:32:48 crc kubenswrapper[4857]: E0318 15:32:48.520608 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de3c0b_498d_4ba7_b8b8_13d66e61ae59.slice/crio-conmon-97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de3c0b_498d_4ba7_b8b8_13d66e61ae59.slice/crio-97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f.scope\": RecentStats: unable to find data in memory cache]" Mar 18 15:32:48 crc kubenswrapper[4857]: E0318 15:32:48.520662 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de3c0b_498d_4ba7_b8b8_13d66e61ae59.slice/crio-97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de3c0b_498d_4ba7_b8b8_13d66e61ae59.slice/crio-conmon-97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6539aa64_4253_4ec8_aa9a_0033341b3c9d.slice/crio-984cc5733d3843974781d97d58a00bbf56cb77f947b7a65fe85c96256429231c\": RecentStats: unable to find data in memory cache]" Mar 18 15:32:48 crc kubenswrapper[4857]: E0318 15:32:48.527818 4857 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2de3c0b_498d_4ba7_b8b8_13d66e61ae59.slice/crio-conmon-97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f.scope\": RecentStats: unable to find data in memory cache]" Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.653305 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.718979 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ccrhf"] Mar 18 15:32:48 crc kubenswrapper[4857]: I0318 15:32:48.978407 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.090576 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content\") pod \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.090691 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities\") pod \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.091376 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92rp7\" (UniqueName: \"kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7\") pod \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\" (UID: \"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59\") " Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.093068 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities" (OuterVolumeSpecName: "utilities") pod "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" (UID: "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.099626 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7" (OuterVolumeSpecName: "kube-api-access-92rp7") pod "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" (UID: "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59"). InnerVolumeSpecName "kube-api-access-92rp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.146835 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" (UID: "e2de3c0b-498d-4ba7-b8b8-13d66e61ae59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.607314 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.617351 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.618879 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swdws" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.621802 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92rp7\" (UniqueName: \"kubernetes.io/projected/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59-kube-api-access-92rp7\") on node \"crc\" DevicePath \"\"" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.645459 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" path="/var/lib/kubelet/pods/6539aa64-4253-4ec8-aa9a-0033341b3c9d/volumes" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.652562 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swdws" event={"ID":"e2de3c0b-498d-4ba7-b8b8-13d66e61ae59","Type":"ContainerDied","Data":"bb70eda0bc045e921ca94bb7571277b744c01993601dc2b51e7276c0b9496fb6"} Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.652646 4857 scope.go:117] "RemoveContainer" containerID="97c90135d763953732e873165ee977f7569d8d1a53bb3d5bc46a0238b70b564f" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.694080 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.699684 4857 scope.go:117] "RemoveContainer" containerID="c1d23a226f0cc065668fce32e8c6e891d65eb208000a996666c02eab3190e17b" Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.709507 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-swdws"] Mar 18 15:32:49 crc kubenswrapper[4857]: I0318 15:32:49.746933 4857 scope.go:117] "RemoveContainer" containerID="f84aef82b7c8cc73a434eb34b20d6e26b79e06122f47937415d32e78b38e6b8e" Mar 18 15:32:51 crc kubenswrapper[4857]: I0318 15:32:51.182004 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" 
path="/var/lib/kubelet/pods/e2de3c0b-498d-4ba7-b8b8-13d66e61ae59/volumes" Mar 18 15:32:55 crc kubenswrapper[4857]: I0318 15:32:55.156361 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" probeResult="failure" output=< Mar 18 15:32:55 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:32:55 crc kubenswrapper[4857]: > Mar 18 15:32:59 crc kubenswrapper[4857]: I0318 15:32:59.932504 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-5kgxv" event={"ID":"37883785-3057-4faf-9dac-97d6b547801b","Type":"ContainerStarted","Data":"c8f09c73f10e410cc122138c0d441dd8a132c8d670180e50adafe6f30d5167d2"} Mar 18 15:32:59 crc kubenswrapper[4857]: I0318 15:32:59.970045 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2s484/crc-debug-5kgxv" podStartSLOduration=2.754871542 podStartE2EDuration="37.970009325s" podCreationTimestamp="2026-03-18 15:32:22 +0000 UTC" firstStartedPulling="2026-03-18 15:32:23.482967734 +0000 UTC m=+5527.612096181" lastFinishedPulling="2026-03-18 15:32:58.698105507 +0000 UTC m=+5562.827233964" observedRunningTime="2026-03-18 15:32:59.959433318 +0000 UTC m=+5564.088561775" watchObservedRunningTime="2026-03-18 15:32:59.970009325 +0000 UTC m=+5564.099137782" Mar 18 15:33:04 crc kubenswrapper[4857]: I0318 15:33:04.361648 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:33:04 crc kubenswrapper[4857]: I0318 15:33:04.448861 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:33:04 crc kubenswrapper[4857]: I0318 15:33:04.621666 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 
15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.046173 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r2m9n" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" containerID="cri-o://11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8" gracePeriod=2 Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.728505 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.838769 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsrdx\" (UniqueName: \"kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx\") pod \"c4f4b139-b1bb-4125-ada4-f153f05c6248\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.839317 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content\") pod \"c4f4b139-b1bb-4125-ada4-f153f05c6248\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.839374 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities\") pod \"c4f4b139-b1bb-4125-ada4-f153f05c6248\" (UID: \"c4f4b139-b1bb-4125-ada4-f153f05c6248\") " Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.840602 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities" (OuterVolumeSpecName: "utilities") pod "c4f4b139-b1bb-4125-ada4-f153f05c6248" (UID: "c4f4b139-b1bb-4125-ada4-f153f05c6248"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.841032 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.855984 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx" (OuterVolumeSpecName: "kube-api-access-hsrdx") pod "c4f4b139-b1bb-4125-ada4-f153f05c6248" (UID: "c4f4b139-b1bb-4125-ada4-f153f05c6248"). InnerVolumeSpecName "kube-api-access-hsrdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.944402 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsrdx\" (UniqueName: \"kubernetes.io/projected/c4f4b139-b1bb-4125-ada4-f153f05c6248-kube-api-access-hsrdx\") on node \"crc\" DevicePath \"\"" Mar 18 15:33:07 crc kubenswrapper[4857]: I0318 15:33:07.947738 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4f4b139-b1bb-4125-ada4-f153f05c6248" (UID: "c4f4b139-b1bb-4125-ada4-f153f05c6248"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.047883 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f4b139-b1bb-4125-ada4-f153f05c6248-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.062333 4857 generic.go:334] "Generic (PLEG): container finished" podID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerID="11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8" exitCode=0 Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.062502 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2m9n" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.062523 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerDied","Data":"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8"} Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.064043 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2m9n" event={"ID":"c4f4b139-b1bb-4125-ada4-f153f05c6248","Type":"ContainerDied","Data":"9faf05e0dda8b35254fee58b0ecbcf7a498c4d49a2b84641d6cb354f94b62aa8"} Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.064072 4857 scope.go:117] "RemoveContainer" containerID="11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.119331 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.131906 4857 scope.go:117] "RemoveContainer" containerID="c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 
15:33:08.137774 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r2m9n"] Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.170924 4857 scope.go:117] "RemoveContainer" containerID="3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.221548 4857 scope.go:117] "RemoveContainer" containerID="11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8" Mar 18 15:33:08 crc kubenswrapper[4857]: E0318 15:33:08.222588 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8\": container with ID starting with 11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8 not found: ID does not exist" containerID="11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.222778 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8"} err="failed to get container status \"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8\": rpc error: code = NotFound desc = could not find container \"11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8\": container with ID starting with 11513a9e456298212d9517642cb78210928e4e5a19d3a58daf172b5e12b1e7f8 not found: ID does not exist" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.222893 4857 scope.go:117] "RemoveContainer" containerID="c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da" Mar 18 15:33:08 crc kubenswrapper[4857]: E0318 15:33:08.223364 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da\": container with ID 
starting with c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da not found: ID does not exist" containerID="c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.223401 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da"} err="failed to get container status \"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da\": rpc error: code = NotFound desc = could not find container \"c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da\": container with ID starting with c1b54e591a876945931fcd8fd9912b90e81e3d14ac7056c7dfee7776c8eed3da not found: ID does not exist" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.223433 4857 scope.go:117] "RemoveContainer" containerID="3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01" Mar 18 15:33:08 crc kubenswrapper[4857]: E0318 15:33:08.223969 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01\": container with ID starting with 3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01 not found: ID does not exist" containerID="3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01" Mar 18 15:33:08 crc kubenswrapper[4857]: I0318 15:33:08.224025 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01"} err="failed to get container status \"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01\": rpc error: code = NotFound desc = could not find container \"3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01\": container with ID starting with 3c8bc1ca0f094ce51a0362274a565e4cb0b0d5c2044a620a08b862ec5f683e01 not found: 
ID does not exist" Mar 18 15:33:09 crc kubenswrapper[4857]: I0318 15:33:09.178828 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" path="/var/lib/kubelet/pods/c4f4b139-b1bb-4125-ada4-f153f05c6248/volumes" Mar 18 15:33:19 crc kubenswrapper[4857]: I0318 15:33:19.653018 4857 generic.go:334] "Generic (PLEG): container finished" podID="bc2369f0-d23b-4453-a74c-f8581c9f5cc0" containerID="783ccda3034bcd4060228c662b4bc26ab6b3a9b1ea6187056fac74f230912fb1" exitCode=0 Mar 18 15:33:19 crc kubenswrapper[4857]: I0318 15:33:19.653263 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" event={"ID":"bc2369f0-d23b-4453-a74c-f8581c9f5cc0","Type":"ContainerDied","Data":"783ccda3034bcd4060228c662b4bc26ab6b3a9b1ea6187056fac74f230912fb1"} Mar 18 15:33:20 crc kubenswrapper[4857]: I0318 15:33:20.686842 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" event={"ID":"bc2369f0-d23b-4453-a74c-f8581c9f5cc0","Type":"ContainerStarted","Data":"1d508c1017a9166960b064be7918464428d743dfc899dc71360306406133e2de"} Mar 18 15:33:27 crc kubenswrapper[4857]: I0318 15:33:27.038897 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:33:27 crc kubenswrapper[4857]: I0318 15:33:27.039409 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:33:37 crc kubenswrapper[4857]: I0318 15:33:37.618918 4857 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 15:33:37 crc kubenswrapper[4857]: I0318 15:33:37.619435 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 15:33:57 crc kubenswrapper[4857]: I0318 15:33:57.041951 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:33:57 crc kubenswrapper[4857]: I0318 15:33:57.043081 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:33:57 crc kubenswrapper[4857]: I0318 15:33:57.640810 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 15:33:57 crc kubenswrapper[4857]: I0318 15:33:57.662274 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6f67489d6c-zwgbg" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.223804 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564134-tqxnb"] Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232627 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232672 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" 
containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232702 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232709 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232730 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232738 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232794 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232801 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232818 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232825 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="extract-content" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232844 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232852 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" 
containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232865 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232870 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232897 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232903 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: E0318 15:34:00.232914 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.232920 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="extract-utilities" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.234486 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f4b139-b1bb-4125-ada4-f153f05c6248" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.234526 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2de3c0b-498d-4ba7-b8b8-13d66e61ae59" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.234547 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="6539aa64-4253-4ec8-aa9a-0033341b3c9d" containerName="registry-server" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.236094 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.243870 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.247124 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.247426 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.300881 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564134-tqxnb"] Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.324879 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2c5\" (UniqueName: \"kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5\") pod \"auto-csr-approver-29564134-tqxnb\" (UID: \"397821a2-be75-4f0d-a83f-f61eb459c9cb\") " pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.427508 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz2c5\" (UniqueName: \"kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5\") pod \"auto-csr-approver-29564134-tqxnb\" (UID: \"397821a2-be75-4f0d-a83f-f61eb459c9cb\") " pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.449876 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz2c5\" (UniqueName: \"kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5\") pod \"auto-csr-approver-29564134-tqxnb\" (UID: \"397821a2-be75-4f0d-a83f-f61eb459c9cb\") " 
pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:00 crc kubenswrapper[4857]: I0318 15:34:00.565608 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:01 crc kubenswrapper[4857]: W0318 15:34:01.711981 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod397821a2_be75_4f0d_a83f_f61eb459c9cb.slice/crio-9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e WatchSource:0}: Error finding container 9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e: Status 404 returned error can't find the container with id 9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e Mar 18 15:34:01 crc kubenswrapper[4857]: I0318 15:34:01.723270 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564134-tqxnb"] Mar 18 15:34:02 crc kubenswrapper[4857]: I0318 15:34:02.429858 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" event={"ID":"397821a2-be75-4f0d-a83f-f61eb459c9cb","Type":"ContainerStarted","Data":"9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e"} Mar 18 15:34:04 crc kubenswrapper[4857]: I0318 15:34:04.461191 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" event={"ID":"397821a2-be75-4f0d-a83f-f61eb459c9cb","Type":"ContainerStarted","Data":"5e6269276411348d1e0ea381ebd471d107e3d9ea3fd40c71631feabb31e05f98"} Mar 18 15:34:04 crc kubenswrapper[4857]: I0318 15:34:04.490826 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" podStartSLOduration=3.058587296 podStartE2EDuration="4.490742814s" podCreationTimestamp="2026-03-18 15:34:00 +0000 UTC" firstStartedPulling="2026-03-18 15:34:01.720695645 +0000 UTC 
m=+5625.849824112" lastFinishedPulling="2026-03-18 15:34:03.152851183 +0000 UTC m=+5627.281979630" observedRunningTime="2026-03-18 15:34:04.480544757 +0000 UTC m=+5628.609673214" watchObservedRunningTime="2026-03-18 15:34:04.490742814 +0000 UTC m=+5628.619871271" Mar 18 15:34:07 crc kubenswrapper[4857]: I0318 15:34:07.675974 4857 generic.go:334] "Generic (PLEG): container finished" podID="397821a2-be75-4f0d-a83f-f61eb459c9cb" containerID="5e6269276411348d1e0ea381ebd471d107e3d9ea3fd40c71631feabb31e05f98" exitCode=0 Mar 18 15:34:07 crc kubenswrapper[4857]: I0318 15:34:07.676049 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" event={"ID":"397821a2-be75-4f0d-a83f-f61eb459c9cb","Type":"ContainerDied","Data":"5e6269276411348d1e0ea381ebd471d107e3d9ea3fd40c71631feabb31e05f98"} Mar 18 15:34:08 crc kubenswrapper[4857]: I0318 15:34:08.717064 4857 generic.go:334] "Generic (PLEG): container finished" podID="37883785-3057-4faf-9dac-97d6b547801b" containerID="c8f09c73f10e410cc122138c0d441dd8a132c8d670180e50adafe6f30d5167d2" exitCode=0 Mar 18 15:34:08 crc kubenswrapper[4857]: I0318 15:34:08.717101 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-5kgxv" event={"ID":"37883785-3057-4faf-9dac-97d6b547801b","Type":"ContainerDied","Data":"c8f09c73f10e410cc122138c0d441dd8a132c8d670180e50adafe6f30d5167d2"} Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.731577 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" event={"ID":"397821a2-be75-4f0d-a83f-f61eb459c9cb","Type":"ContainerDied","Data":"9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e"} Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.732006 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dec428c762bc420fc9ae65b35a7d56a091c2fe2acdf19d307c1eca499887d7e" Mar 18 15:34:09 crc kubenswrapper[4857]: 
I0318 15:34:09.812475 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.824283 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.830873 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host\") pod \"37883785-3057-4faf-9dac-97d6b547801b\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.830941 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz2c5\" (UniqueName: \"kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5\") pod \"397821a2-be75-4f0d-a83f-f61eb459c9cb\" (UID: \"397821a2-be75-4f0d-a83f-f61eb459c9cb\") " Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.830967 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host" (OuterVolumeSpecName: "host") pod "37883785-3057-4faf-9dac-97d6b547801b" (UID: "37883785-3057-4faf-9dac-97d6b547801b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.831185 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7mrk\" (UniqueName: \"kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk\") pod \"37883785-3057-4faf-9dac-97d6b547801b\" (UID: \"37883785-3057-4faf-9dac-97d6b547801b\") " Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.832031 4857 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/37883785-3057-4faf-9dac-97d6b547801b-host\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.868180 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk" (OuterVolumeSpecName: "kube-api-access-m7mrk") pod "37883785-3057-4faf-9dac-97d6b547801b" (UID: "37883785-3057-4faf-9dac-97d6b547801b"). InnerVolumeSpecName "kube-api-access-m7mrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.873585 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5" (OuterVolumeSpecName: "kube-api-access-vz2c5") pod "397821a2-be75-4f0d-a83f-f61eb459c9cb" (UID: "397821a2-be75-4f0d-a83f-f61eb459c9cb"). InnerVolumeSpecName "kube-api-access-vz2c5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.911021 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2s484/crc-debug-5kgxv"] Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.922924 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2s484/crc-debug-5kgxv"] Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.934977 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7mrk\" (UniqueName: \"kubernetes.io/projected/37883785-3057-4faf-9dac-97d6b547801b-kube-api-access-m7mrk\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:09 crc kubenswrapper[4857]: I0318 15:34:09.935012 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz2c5\" (UniqueName: \"kubernetes.io/projected/397821a2-be75-4f0d-a83f-f61eb459c9cb-kube-api-access-vz2c5\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:10 crc kubenswrapper[4857]: I0318 15:34:10.898241 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564134-tqxnb" Mar 18 15:34:10 crc kubenswrapper[4857]: I0318 15:34:10.898573 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca27e93aafb49459b4afa012a73304a2f6a0c4c83bbdfe13bd4e6acbbae8beac" Mar 18 15:34:10 crc kubenswrapper[4857]: I0318 15:34:10.898606 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-5kgxv" Mar 18 15:34:10 crc kubenswrapper[4857]: I0318 15:34:10.964358 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564128-rfbbs"] Mar 18 15:34:10 crc kubenswrapper[4857]: I0318 15:34:10.975438 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564128-rfbbs"] Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.145344 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2s484/crc-debug-29jmk"] Mar 18 15:34:11 crc kubenswrapper[4857]: E0318 15:34:11.147243 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397821a2-be75-4f0d-a83f-f61eb459c9cb" containerName="oc" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.147359 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="397821a2-be75-4f0d-a83f-f61eb459c9cb" containerName="oc" Mar 18 15:34:11 crc kubenswrapper[4857]: E0318 15:34:11.147377 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37883785-3057-4faf-9dac-97d6b547801b" containerName="container-00" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.147385 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="37883785-3057-4faf-9dac-97d6b547801b" containerName="container-00" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.147945 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="37883785-3057-4faf-9dac-97d6b547801b" containerName="container-00" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.147998 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="397821a2-be75-4f0d-a83f-f61eb459c9cb" containerName="oc" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.150942 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.153816 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2s484"/"default-dockercfg-9dtng" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.187290 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37883785-3057-4faf-9dac-97d6b547801b" path="/var/lib/kubelet/pods/37883785-3057-4faf-9dac-97d6b547801b/volumes" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.190862 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5fbed3-75da-4a41-b46e-12e195588151" path="/var/lib/kubelet/pods/8e5fbed3-75da-4a41-b46e-12e195588151/volumes" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.228920 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnkm\" (UniqueName: \"kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.228994 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.332076 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnkm\" (UniqueName: \"kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 
15:34:11.332156 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.334262 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.372672 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjnkm\" (UniqueName: \"kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm\") pod \"crc-debug-29jmk\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.477508 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.923735 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-29jmk" event={"ID":"7304cdfc-854a-44ef-917d-fbdc7485b138","Type":"ContainerStarted","Data":"c4a987a43bf1e55f183691cfb46c3a12911eba7415a969d6e0497abea99c08c0"} Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.924100 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-29jmk" event={"ID":"7304cdfc-854a-44ef-917d-fbdc7485b138","Type":"ContainerStarted","Data":"1c03a3e9b246c1a8ff9e4ff4e151ec7850cb9d0ff1660995ed2fb87602f8a317"} Mar 18 15:34:11 crc kubenswrapper[4857]: I0318 15:34:11.978342 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2s484/crc-debug-29jmk" podStartSLOduration=0.978311529 podStartE2EDuration="978.311529ms" podCreationTimestamp="2026-03-18 15:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 15:34:11.950091308 +0000 UTC m=+5636.079219765" watchObservedRunningTime="2026-03-18 15:34:11.978311529 +0000 UTC m=+5636.107439996" Mar 18 15:34:12 crc kubenswrapper[4857]: I0318 15:34:12.941442 4857 generic.go:334] "Generic (PLEG): container finished" podID="7304cdfc-854a-44ef-917d-fbdc7485b138" containerID="c4a987a43bf1e55f183691cfb46c3a12911eba7415a969d6e0497abea99c08c0" exitCode=0 Mar 18 15:34:12 crc kubenswrapper[4857]: I0318 15:34:12.942013 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-29jmk" event={"ID":"7304cdfc-854a-44ef-917d-fbdc7485b138","Type":"ContainerDied","Data":"c4a987a43bf1e55f183691cfb46c3a12911eba7415a969d6e0497abea99c08c0"} Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.366887 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.421264 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjnkm\" (UniqueName: \"kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm\") pod \"7304cdfc-854a-44ef-917d-fbdc7485b138\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.421817 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host\") pod \"7304cdfc-854a-44ef-917d-fbdc7485b138\" (UID: \"7304cdfc-854a-44ef-917d-fbdc7485b138\") " Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.422338 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host" (OuterVolumeSpecName: "host") pod "7304cdfc-854a-44ef-917d-fbdc7485b138" (UID: "7304cdfc-854a-44ef-917d-fbdc7485b138"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.424065 4857 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7304cdfc-854a-44ef-917d-fbdc7485b138-host\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.429661 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm" (OuterVolumeSpecName: "kube-api-access-fjnkm") pod "7304cdfc-854a-44ef-917d-fbdc7485b138" (UID: "7304cdfc-854a-44ef-917d-fbdc7485b138"). InnerVolumeSpecName "kube-api-access-fjnkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.450937 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2s484/crc-debug-29jmk"] Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.461956 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2s484/crc-debug-29jmk"] Mar 18 15:34:14 crc kubenswrapper[4857]: I0318 15:34:14.525953 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjnkm\" (UniqueName: \"kubernetes.io/projected/7304cdfc-854a-44ef-917d-fbdc7485b138-kube-api-access-fjnkm\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.179778 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7304cdfc-854a-44ef-917d-fbdc7485b138" path="/var/lib/kubelet/pods/7304cdfc-854a-44ef-917d-fbdc7485b138/volumes" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.236919 4857 scope.go:117] "RemoveContainer" containerID="c4a987a43bf1e55f183691cfb46c3a12911eba7415a969d6e0497abea99c08c0" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.237260 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-29jmk" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.644119 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2s484/crc-debug-92xbd"] Mar 18 15:34:15 crc kubenswrapper[4857]: E0318 15:34:15.644898 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304cdfc-854a-44ef-917d-fbdc7485b138" containerName="container-00" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.644912 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304cdfc-854a-44ef-917d-fbdc7485b138" containerName="container-00" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.645189 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304cdfc-854a-44ef-917d-fbdc7485b138" containerName="container-00" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.646115 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.649221 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2s484"/"default-dockercfg-9dtng" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.760521 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwc4\" (UniqueName: \"kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.760599 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " 
pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.863586 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwc4\" (UniqueName: \"kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.863676 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.863929 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.888915 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwc4\" (UniqueName: \"kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4\") pod \"crc-debug-92xbd\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:15 crc kubenswrapper[4857]: I0318 15:34:15.964085 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:16 crc kubenswrapper[4857]: W0318 15:34:16.501836 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55da6c16_7438_4bff_b0d3_6c0a994a3e4e.slice/crio-e102fa51b69ec911de7aa1868d0cb17a6f9766a7b9f2276e9311b0247bdc4ce0 WatchSource:0}: Error finding container e102fa51b69ec911de7aa1868d0cb17a6f9766a7b9f2276e9311b0247bdc4ce0: Status 404 returned error can't find the container with id e102fa51b69ec911de7aa1868d0cb17a6f9766a7b9f2276e9311b0247bdc4ce0 Mar 18 15:34:17 crc kubenswrapper[4857]: I0318 15:34:17.521198 4857 generic.go:334] "Generic (PLEG): container finished" podID="55da6c16-7438-4bff-b0d3-6c0a994a3e4e" containerID="f517d9c84f653aea44fa18fddf0392e6b8cce63446bafb8dd04f8cfe4d78a699" exitCode=0 Mar 18 15:34:17 crc kubenswrapper[4857]: I0318 15:34:17.521253 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-92xbd" event={"ID":"55da6c16-7438-4bff-b0d3-6c0a994a3e4e","Type":"ContainerDied","Data":"f517d9c84f653aea44fa18fddf0392e6b8cce63446bafb8dd04f8cfe4d78a699"} Mar 18 15:34:17 crc kubenswrapper[4857]: I0318 15:34:17.521497 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/crc-debug-92xbd" event={"ID":"55da6c16-7438-4bff-b0d3-6c0a994a3e4e","Type":"ContainerStarted","Data":"e102fa51b69ec911de7aa1868d0cb17a6f9766a7b9f2276e9311b0247bdc4ce0"} Mar 18 15:34:17 crc kubenswrapper[4857]: I0318 15:34:17.598193 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2s484/crc-debug-92xbd"] Mar 18 15:34:17 crc kubenswrapper[4857]: I0318 15:34:17.612902 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2s484/crc-debug-92xbd"] Mar 18 15:34:18 crc kubenswrapper[4857]: I0318 15:34:18.981980 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.004103 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host\") pod \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.004185 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbwc4\" (UniqueName: \"kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4\") pod \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\" (UID: \"55da6c16-7438-4bff-b0d3-6c0a994a3e4e\") " Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.004266 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host" (OuterVolumeSpecName: "host") pod "55da6c16-7438-4bff-b0d3-6c0a994a3e4e" (UID: "55da6c16-7438-4bff-b0d3-6c0a994a3e4e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.005556 4857 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-host\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.017133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4" (OuterVolumeSpecName: "kube-api-access-cbwc4") pod "55da6c16-7438-4bff-b0d3-6c0a994a3e4e" (UID: "55da6c16-7438-4bff-b0d3-6c0a994a3e4e"). InnerVolumeSpecName "kube-api-access-cbwc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.108574 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbwc4\" (UniqueName: \"kubernetes.io/projected/55da6c16-7438-4bff-b0d3-6c0a994a3e4e-kube-api-access-cbwc4\") on node \"crc\" DevicePath \"\"" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.181350 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55da6c16-7438-4bff-b0d3-6c0a994a3e4e" path="/var/lib/kubelet/pods/55da6c16-7438-4bff-b0d3-6c0a994a3e4e/volumes" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.827518 4857 scope.go:117] "RemoveContainer" containerID="f517d9c84f653aea44fa18fddf0392e6b8cce63446bafb8dd04f8cfe4d78a699" Mar 18 15:34:19 crc kubenswrapper[4857]: I0318 15:34:19.827977 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/crc-debug-92xbd" Mar 18 15:34:27 crc kubenswrapper[4857]: I0318 15:34:27.168071 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:34:27 crc kubenswrapper[4857]: I0318 15:34:27.171628 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:34:27 crc kubenswrapper[4857]: I0318 15:34:27.260580 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 15:34:27 crc kubenswrapper[4857]: I0318 15:34:27.261608 4857 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 15:34:27 crc kubenswrapper[4857]: I0318 15:34:27.261689 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7" gracePeriod=600 Mar 18 15:34:28 crc kubenswrapper[4857]: I0318 15:34:28.452125 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7" exitCode=0 Mar 18 15:34:28 crc kubenswrapper[4857]: I0318 15:34:28.452688 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7"} Mar 18 15:34:28 crc kubenswrapper[4857]: I0318 15:34:28.452722 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"} Mar 18 15:34:28 crc kubenswrapper[4857]: I0318 15:34:28.452748 4857 scope.go:117] "RemoveContainer" containerID="67aa0fc611230cc497eed0d52f02dd128e13115c6d888188f76b52aa8250335d" Mar 18 15:34:45 crc kubenswrapper[4857]: I0318 15:34:45.646166 4857 scope.go:117] "RemoveContainer" 
containerID="8b2505cad8b3bd8f45a8c1c64c413c1b7a6659cc1dbd6c3e92f5fee9220fd56d" Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.523258 4857 trace.go:236] Trace[645626058]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-vs9hw" (18-Mar-2026 15:34:54.992) (total time: 23530ms): Mar 18 15:35:18 crc kubenswrapper[4857]: Trace[645626058]: [23.530872089s] [23.530872089s] END Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.537846 4857 trace.go:236] Trace[613068523]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (18-Mar-2026 15:35:00.588) (total time: 17948ms): Mar 18 15:35:18 crc kubenswrapper[4857]: Trace[613068523]: [17.948874933s] [17.948874933s] END Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.562790 4857 trace.go:236] Trace[620467762]: "Calculate volume metrics of glance for pod openstack/glance-default-internal-api-0" (18-Mar-2026 15:35:07.648) (total time: 10914ms): Mar 18 15:35:18 crc kubenswrapper[4857]: Trace[620467762]: [10.914294396s] [10.914294396s] END Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.562821 4857 trace.go:236] Trace[1529404073]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (18-Mar-2026 15:34:41.047) (total time: 37515ms): Mar 18 15:35:18 crc kubenswrapper[4857]: Trace[1529404073]: [37.515234545s] [37.515234545s] END Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.585739 4857 trace.go:236] Trace[1663857519]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Mar-2026 15:34:59.309) (total time: 19276ms): Mar 18 15:35:18 crc kubenswrapper[4857]: Trace[1663857519]: [19.276098987s] [19.276098987s] END Mar 18 15:35:18 crc kubenswrapper[4857]: I0318 15:35:18.928991 4857 trace.go:236] Trace[1859058713]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (18-Mar-2026 15:35:05.931) (total time: 12997ms): Mar 18 15:35:18 crc 
kubenswrapper[4857]: Trace[1859058713]: [12.997274189s] [12.997274189s] END Mar 18 15:35:37 crc kubenswrapper[4857]: I0318 15:35:37.682300 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c6880f18-f2cd-43fa-8ef7-8f0d89744e3c/aodh-api/0.log" Mar 18 15:35:37 crc kubenswrapper[4857]: I0318 15:35:37.742596 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c6880f18-f2cd-43fa-8ef7-8f0d89744e3c/aodh-listener/0.log" Mar 18 15:35:37 crc kubenswrapper[4857]: I0318 15:35:37.756270 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c6880f18-f2cd-43fa-8ef7-8f0d89744e3c/aodh-evaluator/0.log" Mar 18 15:35:37 crc kubenswrapper[4857]: I0318 15:35:37.899710 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_c6880f18-f2cd-43fa-8ef7-8f0d89744e3c/aodh-notifier/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.025354 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-758d4bf778-sxwcw_40d7e3cc-c623-483b-bbd0-f88a2246cf7b/barbican-api/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.077670 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-758d4bf778-sxwcw_40d7e3cc-c623-483b-bbd0-f88a2246cf7b/barbican-api-log/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.232867 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cc94898c8-q6kp6_3cc875e0-0e5b-446b-8836-5c8b3ceb9736/barbican-keystone-listener/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.362290 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7cc94898c8-q6kp6_3cc875e0-0e5b-446b-8836-5c8b3ceb9736/barbican-keystone-listener-log/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.425643 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5b8d49d4dc-q2jgf_cec7fb8b-0248-4c9b-ba87-9d0840a07ce7/barbican-worker/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.454599 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b8d49d4dc-q2jgf_cec7fb8b-0248-4c9b-ba87-9d0840a07ce7/barbican-worker-log/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.717882 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-l9vsq_f20fa8fb-3d4b-40c1-bcc4-6e5f7a362941/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:38 crc kubenswrapper[4857]: I0318 15:35:38.779478 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97a08b04-cfff-4c38-90d4-aa20b69ade73/ceilometer-central-agent/1.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.060637 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97a08b04-cfff-4c38-90d4-aa20b69ade73/ceilometer-notification-agent/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.100878 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97a08b04-cfff-4c38-90d4-aa20b69ade73/ceilometer-central-agent/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.122898 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97a08b04-cfff-4c38-90d4-aa20b69ade73/proxy-httpd/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.176245 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97a08b04-cfff-4c38-90d4-aa20b69ade73/sg-core/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.778876 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_38f691fd-1071-4bdd-a29a-e0b7ae81432e/cinder-api-log/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.858185 4857 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_38f691fd-1071-4bdd-a29a-e0b7ae81432e/cinder-api/0.log" Mar 18 15:35:39 crc kubenswrapper[4857]: I0318 15:35:39.914322 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2dbb697-87e8-4c7f-bf29-a918e84fd78e/cinder-scheduler/1.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.054854 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2dbb697-87e8-4c7f-bf29-a918e84fd78e/cinder-scheduler/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.196827 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2dbb697-87e8-4c7f-bf29-a918e84fd78e/probe/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.201276 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-2dvst_e6c1f7fd-57a6-4598-8ea2-6986be701e93/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.553948 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dkfvs_332600d9-3b78-4b64-8cb2-97fbc6832fc4/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.613118 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-r9gxm_b3a981c6-60b8-4191-a6c1-111dc8997817/init/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.939057 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-r9gxm_b3a981c6-60b8-4191-a6c1-111dc8997817/dnsmasq-dns/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.976675 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vlpss_da0fb4e6-9c13-42e7-8771-3f0fc9d2045d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:40 crc kubenswrapper[4857]: I0318 15:35:40.998514 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-r9gxm_b3a981c6-60b8-4191-a6c1-111dc8997817/init/0.log" Mar 18 15:35:41 crc kubenswrapper[4857]: I0318 15:35:41.222555 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_49d4f556-2bf9-4361-989b-e4d191f7fee4/glance-log/0.log" Mar 18 15:35:41 crc kubenswrapper[4857]: I0318 15:35:41.303919 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_49d4f556-2bf9-4361-989b-e4d191f7fee4/glance-httpd/0.log" Mar 18 15:35:41 crc kubenswrapper[4857]: I0318 15:35:41.525329 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4cc31bb4-6fa0-41fe-b292-9a9de2d9a581/glance-log/0.log" Mar 18 15:35:41 crc kubenswrapper[4857]: I0318 15:35:41.578985 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_4cc31bb4-6fa0-41fe-b292-9a9de2d9a581/glance-httpd/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.275269 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-65d99fb45d-wdcmd_f9fcb1a7-8c36-4029-8711-4d48a03468c3/heat-api/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.323649 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5cc4978d9b-95h9v_5fd5571e-79f0-4266-9b29-c60ea73a918d/heat-engine/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.485810 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-dq4pf_285cebdc-6e07-4290-84bd-37fe6df151e4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.509646 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5fbb7cf74b-jgtw7_8b588aa2-d372-4e34-9bff-4bf820185b48/heat-cfnapi/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.569101 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jkwbn_de89a7e5-ef74-441a-8af0-c8879e1bebdb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:42 crc kubenswrapper[4857]: I0318 15:35:42.917365 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29564101-wqh7w_a3189582-ce5c-4457-b558-181d14d1e6e8/keystone-cron/0.log" Mar 18 15:35:43 crc kubenswrapper[4857]: I0318 15:35:43.153476 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6bddf5f585-25djb_d94f0649-a747-48de-bb74-4db5047cf5d5/keystone-api/0.log" Mar 18 15:35:43 crc kubenswrapper[4857]: I0318 15:35:43.290409 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d86ecda9-1d3b-4efe-9778-30f3f6803c11/kube-state-metrics/0.log" Mar 18 15:35:43 crc kubenswrapper[4857]: I0318 15:35:43.470510 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-z2mf9_cfcf59a9-242d-4953-9276-a0d09a4d3030/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:43 crc kubenswrapper[4857]: I0318 15:35:43.642896 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-hstgx_edcb5d9a-650c-4199-89c1-5f077d3f217f/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:43 crc kubenswrapper[4857]: I0318 15:35:43.958342 4857 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_mysqld-exporter-0_f2d8ee5d-ebb0-464d-8f52-f1bd67b9175c/mysqld-exporter/0.log" Mar 18 15:35:44 crc kubenswrapper[4857]: I0318 15:35:44.224769 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bf8cc5fd5-pf2nl_de9f5a39-f6e4-496d-9a40-a8b8716eaa57/neutron-api/0.log" Mar 18 15:35:44 crc kubenswrapper[4857]: I0318 15:35:44.283331 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bf8cc5fd5-pf2nl_de9f5a39-f6e4-496d-9a40-a8b8716eaa57/neutron-httpd/0.log" Mar 18 15:35:44 crc kubenswrapper[4857]: I0318 15:35:44.349599 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m6rbc_ed495323-60c5-4ea1-8990-0d4c3910b7ac/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:45 crc kubenswrapper[4857]: I0318 15:35:45.057850 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c4cd7203-ecc0-4c47-abd4-de4a574f24ba/nova-api-log/0.log" Mar 18 15:35:45 crc kubenswrapper[4857]: I0318 15:35:45.061324 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_09373ed6-5d90-471d-a45c-4f39dc46caf8/nova-cell0-conductor-conductor/0.log" Mar 18 15:35:45 crc kubenswrapper[4857]: I0318 15:35:45.474973 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_50e494dd-1112-4b7e-b816-50a04847f133/nova-cell1-conductor-conductor/0.log" Mar 18 15:35:45 crc kubenswrapper[4857]: I0318 15:35:45.570316 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_221ab7cd-f76f-4e82-bc62-54fd96aacde6/nova-cell1-novncproxy-novncproxy/0.log" Mar 18 15:35:45 crc kubenswrapper[4857]: I0318 15:35:45.769039 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c4cd7203-ecc0-4c47-abd4-de4a574f24ba/nova-api-api/0.log" Mar 18 15:35:45 crc 
kubenswrapper[4857]: I0318 15:35:45.872194 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-nm64z_9608ecda-882a-47d8-97e1-73eace0dfcb7/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:46 crc kubenswrapper[4857]: I0318 15:35:46.081826 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_019058fb-aa78-4be6-9d60-ebe5a0ce7b67/nova-metadata-log/0.log" Mar 18 15:35:46 crc kubenswrapper[4857]: I0318 15:35:46.325561 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_edcbb6bb-f0dc-4a1b-8bdc-0941cb35dc47/nova-scheduler-scheduler/0.log" Mar 18 15:35:46 crc kubenswrapper[4857]: I0318 15:35:46.481270 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f695aad9-3bb2-4529-bb2b-5c36787464c1/mysql-bootstrap/0.log" Mar 18 15:35:46 crc kubenswrapper[4857]: I0318 15:35:46.734922 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_019058fb-aa78-4be6-9d60-ebe5a0ce7b67/nova-metadata-metadata/0.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.052807 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f695aad9-3bb2-4529-bb2b-5c36787464c1/mysql-bootstrap/0.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.149319 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f695aad9-3bb2-4529-bb2b-5c36787464c1/galera/0.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.153630 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f695aad9-3bb2-4529-bb2b-5c36787464c1/galera/1.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.337130 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f76ea184-35e0-4df6-8c6e-34196ccd7901/mysql-bootstrap/0.log" Mar 18 
15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.673777 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f76ea184-35e0-4df6-8c6e-34196ccd7901/mysql-bootstrap/0.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.715618 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f76ea184-35e0-4df6-8c6e-34196ccd7901/galera/0.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.791283 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f76ea184-35e0-4df6-8c6e-34196ccd7901/galera/1.log" Mar 18 15:35:47 crc kubenswrapper[4857]: I0318 15:35:47.964644 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_40f263c7-0bb2-473d-a658-41b6104343a9/openstackclient/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: I0318 15:35:48.033550 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jvjlg_635e665d-2bdc-4e46-913d-0362aa4d4e3d/ovn-controller/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: I0318 15:35:48.236799 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qs7p9_fb755c3a-d583-40d1-a67d-1af716edbadb/openstack-network-exporter/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: I0318 15:35:48.408997 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7z7fh_583a3a2f-591c-4cb4-96d7-3f1ad08441a8/ovsdb-server-init/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: I0318 15:35:48.658940 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7z7fh_583a3a2f-591c-4cb4-96d7-3f1ad08441a8/ovsdb-server-init/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: I0318 15:35:48.800232 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7z7fh_583a3a2f-591c-4cb4-96d7-3f1ad08441a8/ovsdb-server/0.log" Mar 18 15:35:48 crc kubenswrapper[4857]: 
I0318 15:35:48.853332 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7z7fh_583a3a2f-591c-4cb4-96d7-3f1ad08441a8/ovs-vswitchd/0.log" Mar 18 15:35:49 crc kubenswrapper[4857]: I0318 15:35:49.177709 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-847zj_62857fb3-1258-4014-9345-dfd35035f61f/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:49 crc kubenswrapper[4857]: I0318 15:35:49.182298 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ceaa02e5-9dc8-4200-a963-075794c1e822/ovn-northd/0.log" Mar 18 15:35:49 crc kubenswrapper[4857]: I0318 15:35:49.285797 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ceaa02e5-9dc8-4200-a963-075794c1e822/openstack-network-exporter/0.log" Mar 18 15:35:49 crc kubenswrapper[4857]: I0318 15:35:49.490051 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0/openstack-network-exporter/0.log" Mar 18 15:35:50 crc kubenswrapper[4857]: I0318 15:35:50.356590 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_82585f8a-7069-47cb-b10e-2c83903ddc08/openstack-network-exporter/0.log" Mar 18 15:35:50 crc kubenswrapper[4857]: I0318 15:35:50.364497 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_75dc7be5-1a0a-4b0b-a33a-1a2a852ccde0/ovsdbserver-nb/0.log" Mar 18 15:35:50 crc kubenswrapper[4857]: I0318 15:35:50.373513 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_82585f8a-7069-47cb-b10e-2c83903ddc08/ovsdbserver-sb/0.log" Mar 18 15:35:50 crc kubenswrapper[4857]: I0318 15:35:50.724422 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bd6fd9d7b-xrcmg_744c80f0-c04e-48e5-a6ae-8fe7ae2f5775/placement-api/0.log" Mar 18 15:35:50 crc 
kubenswrapper[4857]: I0318 15:35:50.829035 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5bd6fd9d7b-xrcmg_744c80f0-c04e-48e5-a6ae-8fe7ae2f5775/placement-log/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.254961 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_117d706b-860f-4f17-8f2b-5d27b7cdfe61/init-config-reloader/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.475664 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_117d706b-860f-4f17-8f2b-5d27b7cdfe61/init-config-reloader/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.612816 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_117d706b-860f-4f17-8f2b-5d27b7cdfe61/config-reloader/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.658426 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_117d706b-860f-4f17-8f2b-5d27b7cdfe61/thanos-sidecar/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.661532 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_117d706b-860f-4f17-8f2b-5d27b7cdfe61/prometheus/0.log" Mar 18 15:35:51 crc kubenswrapper[4857]: I0318 15:35:51.885085 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf037310-f1c6-404e-b55a-f23c33b43373/setup-container/0.log" Mar 18 15:35:52 crc kubenswrapper[4857]: I0318 15:35:52.116728 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf037310-f1c6-404e-b55a-f23c33b43373/rabbitmq/0.log" Mar 18 15:35:52 crc kubenswrapper[4857]: I0318 15:35:52.184590 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_754a7e75-92a0-4b06-a81d-f00c6cf9957f/setup-container/0.log" Mar 18 15:35:52 
crc kubenswrapper[4857]: I0318 15:35:52.240293 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf037310-f1c6-404e-b55a-f23c33b43373/setup-container/0.log" Mar 18 15:35:52 crc kubenswrapper[4857]: I0318 15:35:52.976940 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_754a7e75-92a0-4b06-a81d-f00c6cf9957f/setup-container/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.010712 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_bffd47eb-3c88-41b8-bda7-f885b44d3ee8/setup-container/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.094051 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_754a7e75-92a0-4b06-a81d-f00c6cf9957f/rabbitmq/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.302546 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_bffd47eb-3c88-41b8-bda7-f885b44d3ee8/setup-container/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.432669 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_e447043a-8fa6-4b8c-b103-57fd3b484088/setup-container/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.467332 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_bffd47eb-3c88-41b8-bda7-f885b44d3ee8/rabbitmq/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.750522 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_e447043a-8fa6-4b8c-b103-57fd3b484088/setup-container/0.log" Mar 18 15:35:53 crc kubenswrapper[4857]: I0318 15:35:53.765328 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-zq6m7_c8a69a18-5407-48c4-bbc6-d60a5824e8db/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:53 crc 
kubenswrapper[4857]: I0318 15:35:53.998376 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_e447043a-8fa6-4b8c-b103-57fd3b484088/rabbitmq/0.log" Mar 18 15:35:54 crc kubenswrapper[4857]: I0318 15:35:54.104218 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-t98d8_527d9c47-3f89-4cf8-a69e-a522189755e1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:54 crc kubenswrapper[4857]: I0318 15:35:54.275000 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6p8xw_055be889-b95b-4aab-8510-682080ae57fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:54 crc kubenswrapper[4857]: I0318 15:35:54.468876 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bggjv_24e3a693-0c83-4f91-94c2-9ea976d3cf90/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:54 crc kubenswrapper[4857]: I0318 15:35:54.552567 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fkrm8_71a9b71f-4dfc-49da-9953-2d1739ff480a/ssh-known-hosts-edpm-deployment/0.log" Mar 18 15:35:54 crc kubenswrapper[4857]: I0318 15:35:54.869070 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-975859b47-gfk64_559e9866-068c-4602-879b-6291b10302c1/proxy-server/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.085427 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-975859b47-gfk64_559e9866-068c-4602-879b-6291b10302c1/proxy-httpd/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.285632 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qmp52_04d9193e-1a5e-4943-9241-05e854fb24cb/swift-ring-rebalance/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 
15:35:55.488528 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/account-auditor/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.643692 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/account-reaper/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.699130 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/account-replicator/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.812090 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/account-server/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.837196 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/container-auditor/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.919818 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/container-replicator/0.log" Mar 18 15:35:55 crc kubenswrapper[4857]: I0318 15:35:55.938504 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/container-server/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.082246 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/container-updater/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.197658 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/object-auditor/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.228343 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/object-replicator/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.242552 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/object-expirer/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.404129 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/object-server/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.483258 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/object-updater/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.497049 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/rsync/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.582216 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1ca61c04-f56b-42c4-99fe-daa7f80436f7/swift-recon-cron/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.870386 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pm2f9_bd20a145-8f96-4a05-b051-38f2e6edc1ad/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:56 crc kubenswrapper[4857]: I0318 15:35:56.908431 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-k8r5d_839c8978-90ec-42f6-9adb-6ca8ec295f61/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:57 crc kubenswrapper[4857]: I0318 15:35:57.264089 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_bad50738-6a0f-49a2-abd9-4ebd71bc9056/test-operator-logs-container/0.log" Mar 18 15:35:57 crc kubenswrapper[4857]: I0318 15:35:57.539680 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6jnvj_1360813c-f243-4286-b916-f690b79bd637/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 18 15:35:57 crc kubenswrapper[4857]: I0318 15:35:57.722126 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_18946755-ed18-4d4a-bd99-7bb08f42c91b/tempest-tests-tempest-tests-runner/0.log" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.227022 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564136-tnfpb"] Mar 18 15:36:00 crc kubenswrapper[4857]: E0318 15:36:00.228616 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55da6c16-7438-4bff-b0d3-6c0a994a3e4e" containerName="container-00" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.228645 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="55da6c16-7438-4bff-b0d3-6c0a994a3e4e" containerName="container-00" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.229209 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="55da6c16-7438-4bff-b0d3-6c0a994a3e4e" containerName="container-00" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.238727 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.244769 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.245488 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.245949 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.330175 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564136-tnfpb"] Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.339656 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whch6\" (UniqueName: \"kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6\") pod \"auto-csr-approver-29564136-tnfpb\" (UID: \"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9\") " pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.442227 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whch6\" (UniqueName: \"kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6\") pod \"auto-csr-approver-29564136-tnfpb\" (UID: \"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9\") " pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.483663 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whch6\" (UniqueName: \"kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6\") pod \"auto-csr-approver-29564136-tnfpb\" (UID: \"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9\") " 
pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:00 crc kubenswrapper[4857]: I0318 15:36:00.594217 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:01 crc kubenswrapper[4857]: I0318 15:36:01.328199 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564136-tnfpb"] Mar 18 15:36:01 crc kubenswrapper[4857]: I0318 15:36:01.977886 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" event={"ID":"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9","Type":"ContainerStarted","Data":"8146b136b5a73a67e9522496588010631148a1df1604ff17246f7bef006dfb84"} Mar 18 15:36:05 crc kubenswrapper[4857]: I0318 15:36:05.040888 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" event={"ID":"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9","Type":"ContainerStarted","Data":"4e0bc1f0dc40fdd9a215c26c2bde303e9b4af474a748823a5c10593cf5d5e626"} Mar 18 15:36:05 crc kubenswrapper[4857]: I0318 15:36:05.073252 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" podStartSLOduration=3.574373586 podStartE2EDuration="5.073206554s" podCreationTimestamp="2026-03-18 15:36:00 +0000 UTC" firstStartedPulling="2026-03-18 15:36:01.340521731 +0000 UTC m=+5745.469650188" lastFinishedPulling="2026-03-18 15:36:02.839354689 +0000 UTC m=+5746.968483156" observedRunningTime="2026-03-18 15:36:05.064293649 +0000 UTC m=+5749.193422106" watchObservedRunningTime="2026-03-18 15:36:05.073206554 +0000 UTC m=+5749.202335011" Mar 18 15:36:05 crc kubenswrapper[4857]: I0318 15:36:05.495182 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bf21e858-d9d3-448f-bc36-522cf6f7dc2d/memcached/0.log" Mar 18 15:36:06 crc kubenswrapper[4857]: I0318 15:36:06.053136 4857 generic.go:334] 
"Generic (PLEG): container finished" podID="9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" containerID="4e0bc1f0dc40fdd9a215c26c2bde303e9b4af474a748823a5c10593cf5d5e626" exitCode=0 Mar 18 15:36:06 crc kubenswrapper[4857]: I0318 15:36:06.053182 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" event={"ID":"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9","Type":"ContainerDied","Data":"4e0bc1f0dc40fdd9a215c26c2bde303e9b4af474a748823a5c10593cf5d5e626"} Mar 18 15:36:07 crc kubenswrapper[4857]: I0318 15:36:07.573847 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:07 crc kubenswrapper[4857]: I0318 15:36:07.632539 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whch6\" (UniqueName: \"kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6\") pod \"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9\" (UID: \"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9\") " Mar 18 15:36:07 crc kubenswrapper[4857]: I0318 15:36:07.660782 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6" (OuterVolumeSpecName: "kube-api-access-whch6") pod "9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" (UID: "9b9fdabc-fe50-430c-a8fb-b376cd3a31e9"). InnerVolumeSpecName "kube-api-access-whch6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:36:07 crc kubenswrapper[4857]: I0318 15:36:07.735272 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whch6\" (UniqueName: \"kubernetes.io/projected/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9-kube-api-access-whch6\") on node \"crc\" DevicePath \"\"" Mar 18 15:36:08 crc kubenswrapper[4857]: I0318 15:36:08.081821 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" event={"ID":"9b9fdabc-fe50-430c-a8fb-b376cd3a31e9","Type":"ContainerDied","Data":"8146b136b5a73a67e9522496588010631148a1df1604ff17246f7bef006dfb84"} Mar 18 15:36:08 crc kubenswrapper[4857]: I0318 15:36:08.081873 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8146b136b5a73a67e9522496588010631148a1df1604ff17246f7bef006dfb84" Mar 18 15:36:08 crc kubenswrapper[4857]: I0318 15:36:08.081966 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564136-tnfpb" Mar 18 15:36:08 crc kubenswrapper[4857]: I0318 15:36:08.146801 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564130-g4dps"] Mar 18 15:36:08 crc kubenswrapper[4857]: I0318 15:36:08.160764 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564130-g4dps"] Mar 18 15:36:09 crc kubenswrapper[4857]: I0318 15:36:09.181260 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2c4b59-9fc5-4ec9-9189-60cb1e716f51" path="/var/lib/kubelet/pods/1b2c4b59-9fc5-4ec9-9189-60cb1e716f51/volumes" Mar 18 15:36:27 crc kubenswrapper[4857]: I0318 15:36:27.038517 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 18 15:36:27 crc kubenswrapper[4857]: I0318 15:36:27.039205 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:36:44 crc kubenswrapper[4857]: I0318 15:36:44.472150 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/util/0.log" Mar 18 15:36:44 crc kubenswrapper[4857]: I0318 15:36:44.805887 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/pull/0.log" Mar 18 15:36:44 crc kubenswrapper[4857]: I0318 15:36:44.818459 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/pull/0.log" Mar 18 15:36:44 crc kubenswrapper[4857]: I0318 15:36:44.854767 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/util/0.log" Mar 18 15:36:45 crc kubenswrapper[4857]: I0318 15:36:45.036680 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/pull/0.log" Mar 18 15:36:45 crc kubenswrapper[4857]: I0318 15:36:45.061910 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/extract/0.log" Mar 18 
15:36:45 crc kubenswrapper[4857]: I0318 15:36:45.086061 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_484aa3d9c3a8164f147add19be293dd9d32025354848ad7e57fd528e10z6qw9_92987a54-b377-41c3-8c50-bc86e82f41c0/util/0.log" Mar 18 15:36:45 crc kubenswrapper[4857]: I0318 15:36:45.410135 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-smknr_b876d788-10af-45fb-95e6-37e7e127249f/manager/0.log" Mar 18 15:36:45 crc kubenswrapper[4857]: I0318 15:36:45.715331 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-ptv8b_8ffb9263-05b9-447d-a332-31f5f3312ea9/manager/0.log" Mar 18 15:36:46 crc kubenswrapper[4857]: I0318 15:36:46.085164 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-dmrdv_e160f13b-785a-46a2-adb4-fa92ce7c6ab7/manager/0.log" Mar 18 15:36:46 crc kubenswrapper[4857]: I0318 15:36:46.250988 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-fvz4f_cffafd39-a112-46ab-becf-ad58facd5712/manager/0.log" Mar 18 15:36:46 crc kubenswrapper[4857]: I0318 15:36:46.338165 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-fqnq2_01c6ffec-b474-4bfb-a282-484214bea129/manager/0.log" Mar 18 15:36:47 crc kubenswrapper[4857]: I0318 15:36:47.535862 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-kddxh_d567742c-e8c4-4c28-9aae-afb3527cd915/manager/1.log" Mar 18 15:36:47 crc kubenswrapper[4857]: I0318 15:36:47.682649 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-xjwdv_2fc1a575-873e-43b1-9707-bc6247ec8bbc/manager/0.log" Mar 18 15:36:47 crc kubenswrapper[4857]: I0318 15:36:47.786694 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-kddxh_d567742c-e8c4-4c28-9aae-afb3527cd915/manager/0.log" Mar 18 15:36:48 crc kubenswrapper[4857]: I0318 15:36:48.505685 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-xnh2t_56663366-8771-43d4-b5df-ef9b84b90a74/manager/0.log" Mar 18 15:36:48 crc kubenswrapper[4857]: I0318 15:36:48.657230 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-ltg7d_73a9b06c-5f5c-46f7-9548-28c5a9513a95/manager/0.log" Mar 18 15:36:48 crc kubenswrapper[4857]: I0318 15:36:48.743935 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-9m5mv_633285e4-04be-48d6-a496-642aa673be88/manager/0.log" Mar 18 15:36:48 crc kubenswrapper[4857]: I0318 15:36:48.990622 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-l4h6z_f86c8f25-0e6c-4911-87f8-7ff89a25a040/manager/0.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.096298 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-v6rv8_2d1893e2-6251-42ef-82d7-529e1f27ec4c/manager/0.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.323604 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-8glm4_7f57203c-7aa8-4db7-a1f1-973a59e8fb9e/manager/0.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.673691 4857 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-8b4ps_d2cd8f0d-237c-4db5-b2c6-31c6d99018e4/manager/1.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.720853 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-8b4ps_d2cd8f0d-237c-4db5-b2c6-31c6d99018e4/manager/0.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.807080 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-jcmxv_f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4/manager/1.log" Mar 18 15:36:49 crc kubenswrapper[4857]: I0318 15:36:49.933835 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-jcmxv_f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4/manager/0.log" Mar 18 15:36:50 crc kubenswrapper[4857]: I0318 15:36:50.209787 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5847fcc4fb-mg28t_fdc9df02-49d3-4a40-ba9c-d6ef085abb04/operator/1.log" Mar 18 15:36:50 crc kubenswrapper[4857]: I0318 15:36:50.442744 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5847fcc4fb-mg28t_fdc9df02-49d3-4a40-ba9c-d6ef085abb04/operator/0.log" Mar 18 15:36:50 crc kubenswrapper[4857]: I0318 15:36:50.598507 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8cxcs_bd585d57-f586-4b7b-8c56-be04591b6bdd/registry-server/1.log" Mar 18 15:36:51 crc kubenswrapper[4857]: I0318 15:36:51.201391 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8cxcs_bd585d57-f586-4b7b-8c56-be04591b6bdd/registry-server/0.log" Mar 18 15:36:51 crc kubenswrapper[4857]: I0318 15:36:51.312034 4857 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-grt7j_ede9ac94-86ad-47ad-9358-4c051ec447cc/manager/0.log" Mar 18 15:36:51 crc kubenswrapper[4857]: I0318 15:36:51.827618 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-nqn4p_ffdcecae-8dae-48b2-84d8-73deac76eeca/manager/0.log" Mar 18 15:36:51 crc kubenswrapper[4857]: I0318 15:36:51.985653 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8g8kw_d992ef23-4762-4349-b1e4-9f6c562a75ac/operator/0.log" Mar 18 15:36:52 crc kubenswrapper[4857]: I0318 15:36:52.088592 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-86872_32bbb0ed-6fc4-407a-82c6-d9be2ed6bb4d/manager/0.log" Mar 18 15:36:52 crc kubenswrapper[4857]: I0318 15:36:52.148762 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-f84d7fd4f-mpg2d_cf688963-c59d-4667-8589-150c82a1e4d3/manager/0.log" Mar 18 15:36:52 crc kubenswrapper[4857]: I0318 15:36:52.987191 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-qpr5j_bf950907-821d-4d28-a563-f9865d7df7f0/manager/0.log" Mar 18 15:36:53 crc kubenswrapper[4857]: I0318 15:36:53.087788 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-fjnbb_18b73b64-9eec-426b-86eb-6a1045a9d25c/manager/0.log" Mar 18 15:36:53 crc kubenswrapper[4857]: I0318 15:36:53.253966 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5b79d7bc79-hmbhp_bdf23497-4141-4f8f-859a-0d1e4f8c80f7/manager/0.log" Mar 18 15:36:57 crc kubenswrapper[4857]: I0318 15:36:57.038961 4857 
patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:36:57 crc kubenswrapper[4857]: I0318 15:36:57.041128 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:37:19 crc kubenswrapper[4857]: I0318 15:37:19.027417 4857 scope.go:117] "RemoveContainer" containerID="3056c6d80f0412acc9e13233ec8ba0e3a011b9f4bc53d7744e986f37b7a49a10" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.039265 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.039893 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.039965 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.041050 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.041105 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" gracePeriod=600 Mar 18 15:37:27 crc kubenswrapper[4857]: E0318 15:37:27.202546 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.704650 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" exitCode=0 Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.705058 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"} Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.705120 4857 scope.go:117] "RemoveContainer" containerID="1d4100150172b393d5bbdeda811346f8f1d21ed3b6fa9ff40f8f958ced2fb6d7" Mar 18 15:37:27 crc kubenswrapper[4857]: I0318 15:37:27.706076 4857 
scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:37:27 crc kubenswrapper[4857]: E0318 15:37:27.706574 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:37:28 crc kubenswrapper[4857]: I0318 15:37:28.421155 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-7fk6f_2dfd5f25-d490-4570-86ed-bf436c585658/control-plane-machine-set-operator/0.log" Mar 18 15:37:28 crc kubenswrapper[4857]: I0318 15:37:28.615502 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5rwkm_096c78f1-127f-4281-81b4-22ff1fd40e04/kube-rbac-proxy/0.log" Mar 18 15:37:28 crc kubenswrapper[4857]: I0318 15:37:28.659760 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5rwkm_096c78f1-127f-4281-81b4-22ff1fd40e04/machine-api-operator/0.log" Mar 18 15:37:41 crc kubenswrapper[4857]: I0318 15:37:41.164750 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:37:41 crc kubenswrapper[4857]: E0318 15:37:41.165566 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:37:46 crc kubenswrapper[4857]: I0318 15:37:46.203288 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-sdd8s_22ba80af-7cf5-4581-abd9-b5078fb0bc48/cert-manager-controller/0.log" Mar 18 15:37:46 crc kubenswrapper[4857]: I0318 15:37:46.336207 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-mtghx_14fe2326-441d-48c7-b4df-cc067beaadff/cert-manager-cainjector/0.log" Mar 18 15:37:46 crc kubenswrapper[4857]: I0318 15:37:46.463179 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mrtkc_2d9b7b6d-9b28-4a50-8bda-458c3f8088c1/cert-manager-webhook/0.log" Mar 18 15:37:54 crc kubenswrapper[4857]: I0318 15:37:54.189423 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:37:54 crc kubenswrapper[4857]: E0318 15:37:54.190743 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.156894 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564138-62j4z"] Mar 18 15:38:00 crc kubenswrapper[4857]: E0318 15:38:00.157961 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" containerName="oc" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.157982 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" containerName="oc" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.158254 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" containerName="oc" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.159251 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.170806 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.170904 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.172786 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.179224 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564138-62j4z"] Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.203099 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdnd\" (UniqueName: \"kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd\") pod \"auto-csr-approver-29564138-62j4z\" (UID: \"a239f10e-d03f-498d-8026-504eb804ae3f\") " pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.306019 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdnd\" (UniqueName: \"kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd\") pod \"auto-csr-approver-29564138-62j4z\" (UID: \"a239f10e-d03f-498d-8026-504eb804ae3f\") " 
pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.327919 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdnd\" (UniqueName: \"kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd\") pod \"auto-csr-approver-29564138-62j4z\" (UID: \"a239f10e-d03f-498d-8026-504eb804ae3f\") " pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:00 crc kubenswrapper[4857]: I0318 15:38:00.483209 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:01 crc kubenswrapper[4857]: W0318 15:38:01.211050 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda239f10e_d03f_498d_8026_504eb804ae3f.slice/crio-97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b WatchSource:0}: Error finding container 97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b: Status 404 returned error can't find the container with id 97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b Mar 18 15:38:01 crc kubenswrapper[4857]: I0318 15:38:01.212486 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564138-62j4z"] Mar 18 15:38:01 crc kubenswrapper[4857]: I0318 15:38:01.221708 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 15:38:02 crc kubenswrapper[4857]: I0318 15:38:02.063832 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564138-62j4z" event={"ID":"a239f10e-d03f-498d-8026-504eb804ae3f","Type":"ContainerStarted","Data":"97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b"} Mar 18 15:38:04 crc kubenswrapper[4857]: I0318 15:38:04.242332 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29564138-62j4z" event={"ID":"a239f10e-d03f-498d-8026-504eb804ae3f","Type":"ContainerStarted","Data":"7033f6d5fee3f3b1a8d5f1f7aae8947de825e63282628909824495df824a4a61"} Mar 18 15:38:04 crc kubenswrapper[4857]: I0318 15:38:04.276640 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564138-62j4z" podStartSLOduration=3.189569493 podStartE2EDuration="4.276609842s" podCreationTimestamp="2026-03-18 15:38:00 +0000 UTC" firstStartedPulling="2026-03-18 15:38:01.219230671 +0000 UTC m=+5865.348359128" lastFinishedPulling="2026-03-18 15:38:02.30627102 +0000 UTC m=+5866.435399477" observedRunningTime="2026-03-18 15:38:04.261204034 +0000 UTC m=+5868.390332491" watchObservedRunningTime="2026-03-18 15:38:04.276609842 +0000 UTC m=+5868.405738299" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.245725 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-gsgsv_d2eb84ee-b26f-4bdf-8887-d14ffea65a41/nmstate-console-plugin/0.log" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.256667 4857 generic.go:334] "Generic (PLEG): container finished" podID="a239f10e-d03f-498d-8026-504eb804ae3f" containerID="7033f6d5fee3f3b1a8d5f1f7aae8947de825e63282628909824495df824a4a61" exitCode=0 Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.256737 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564138-62j4z" event={"ID":"a239f10e-d03f-498d-8026-504eb804ae3f","Type":"ContainerDied","Data":"7033f6d5fee3f3b1a8d5f1f7aae8947de825e63282628909824495df824a4a61"} Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.579122 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dgb87_331d152e-70ee-44a9-8bba-7f9696545421/kube-rbac-proxy/0.log" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.581238 4857 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tg9wd_3471c66b-ec38-4efc-b1ab-cbf281f8d424/nmstate-handler/0.log" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.684058 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dgb87_331d152e-70ee-44a9-8bba-7f9696545421/nmstate-metrics/0.log" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.824887 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-bjm7h_ac94e571-ed34-4042-8c90-f2f582d58b5e/nmstate-operator/0.log" Mar 18 15:38:05 crc kubenswrapper[4857]: I0318 15:38:05.937459 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-gwqfj_45ebdaa4-576e-40b7-810d-0f4fc570125d/nmstate-webhook/0.log" Mar 18 15:38:06 crc kubenswrapper[4857]: I0318 15:38:06.781522 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:06 crc kubenswrapper[4857]: I0318 15:38:06.833938 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzdnd\" (UniqueName: \"kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd\") pod \"a239f10e-d03f-498d-8026-504eb804ae3f\" (UID: \"a239f10e-d03f-498d-8026-504eb804ae3f\") " Mar 18 15:38:06 crc kubenswrapper[4857]: I0318 15:38:06.850481 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd" (OuterVolumeSpecName: "kube-api-access-vzdnd") pod "a239f10e-d03f-498d-8026-504eb804ae3f" (UID: "a239f10e-d03f-498d-8026-504eb804ae3f"). InnerVolumeSpecName "kube-api-access-vzdnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:38:06 crc kubenswrapper[4857]: I0318 15:38:06.936651 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzdnd\" (UniqueName: \"kubernetes.io/projected/a239f10e-d03f-498d-8026-504eb804ae3f-kube-api-access-vzdnd\") on node \"crc\" DevicePath \"\"" Mar 18 15:38:07 crc kubenswrapper[4857]: I0318 15:38:07.282977 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564138-62j4z" event={"ID":"a239f10e-d03f-498d-8026-504eb804ae3f","Type":"ContainerDied","Data":"97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b"} Mar 18 15:38:07 crc kubenswrapper[4857]: I0318 15:38:07.283037 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d51af507c7a137769457f22f3989efedc58ff9d3288fdedeafb2945471772b" Mar 18 15:38:07 crc kubenswrapper[4857]: I0318 15:38:07.283152 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564138-62j4z" Mar 18 15:38:07 crc kubenswrapper[4857]: I0318 15:38:07.354971 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564132-4fgzl"] Mar 18 15:38:07 crc kubenswrapper[4857]: I0318 15:38:07.372554 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564132-4fgzl"] Mar 18 15:38:08 crc kubenswrapper[4857]: I0318 15:38:08.163954 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:38:08 crc kubenswrapper[4857]: E0318 15:38:08.164500 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:38:09 crc kubenswrapper[4857]: I0318 15:38:09.304393 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7f4d26-eaa4-4d54-8a3d-b912b9484318" path="/var/lib/kubelet/pods/2a7f4d26-eaa4-4d54-8a3d-b912b9484318/volumes" Mar 18 15:38:19 crc kubenswrapper[4857]: I0318 15:38:19.287677 4857 scope.go:117] "RemoveContainer" containerID="37bca6d3856c622b84ebe7e2ca4defaeac3b5df10687e315988baf50b8427dac" Mar 18 15:38:22 crc kubenswrapper[4857]: I0318 15:38:22.164941 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:38:22 crc kubenswrapper[4857]: E0318 15:38:22.166174 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:38:23 crc kubenswrapper[4857]: I0318 15:38:23.967368 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-86c8cb9b45-kxpht_e5ba6b5a-524d-488a-9435-5fea2c394e6a/kube-rbac-proxy/0.log" Mar 18 15:38:24 crc kubenswrapper[4857]: I0318 15:38:24.091574 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-86c8cb9b45-kxpht_e5ba6b5a-524d-488a-9435-5fea2c394e6a/manager/0.log" Mar 18 15:38:34 crc kubenswrapper[4857]: I0318 15:38:34.164404 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:38:34 crc kubenswrapper[4857]: E0318 15:38:34.165259 4857 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.390872 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-lsp5b_501dc1bd-0a04-4aef-bff8-43c9e767215f/prometheus-operator/0.log" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.509941 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_518b89ef-5060-4ec2-9a2d-7c64fa3555a5/prometheus-operator-admission-webhook/0.log" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.596946 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47/prometheus-operator-admission-webhook/0.log" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.718145 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-5mw69_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59/operator/1.log" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.853993 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-5mw69_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59/operator/0.log" Mar 18 15:38:39 crc kubenswrapper[4857]: I0318 15:38:39.938331 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7f87b9b85b-lwdf5_6e52a810-35c4-49bb-a0f6-83accdb52311/observability-ui-dashboards/0.log" Mar 18 
15:38:40 crc kubenswrapper[4857]: I0318 15:38:40.063815 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6c9d87fc97-ddtxj_79d3df2c-25f0-4e16-a39d-cc0d6a85277f/perses-operator/0.log" Mar 18 15:38:45 crc kubenswrapper[4857]: I0318 15:38:45.164415 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:38:45 crc kubenswrapper[4857]: E0318 15:38:45.165185 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:38:56 crc kubenswrapper[4857]: I0318 15:38:56.164348 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:38:56 crc kubenswrapper[4857]: E0318 15:38:56.165460 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:39:00 crc kubenswrapper[4857]: I0318 15:39:00.502790 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-66689c4bbf-wq7db_d0f7164f-530a-4171-9a18-cda5db7559c9/cluster-logging-operator/0.log" Mar 18 15:39:00 crc kubenswrapper[4857]: I0318 15:39:00.675008 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_collector-8mmd4_5a9975f7-76d4-402f-aba1-0cd0c476aa9e/collector/0.log" Mar 18 15:39:00 crc kubenswrapper[4857]: I0318 15:39:00.751383 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_4da2f7e2-d9d9-42ff-b7b7-a129541ecc39/loki-compactor/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.089346 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-9c6b6d984-xjvbj_b4256ac3-3896-4c43-8d10-ca5ac43f4991/loki-distributor/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.148644 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-bl8th_9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e/gateway/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.173939 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-bl8th_9c9f048a-5cbb-4f1e-ac83-4ee827a48a0e/opa/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.397600 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-w5jpj_206851e1-412e-4888-9635-f8eca5aa579e/gateway/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.490388 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-fc6d448bf-w5jpj_206851e1-412e-4888-9635-f8eca5aa579e/opa/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.634606 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_5081975d-5c3d-4788-b5e1-cd21e4fa3852/loki-index-gateway/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.718392 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_8fbde296-bf61-4d05-bf29-e27b5b58c150/loki-ingester/0.log" Mar 18 15:39:01 crc 
kubenswrapper[4857]: I0318 15:39:01.870272 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-6dcbdf8bb8-jp89f_64c46410-682b-49b0-9aa2-8f223a69165b/loki-querier/0.log" Mar 18 15:39:01 crc kubenswrapper[4857]: I0318 15:39:01.949627 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-ff66c4dc9-82dsb_366a3cfc-7c2d-4212-a16d-2415868b12ba/loki-query-frontend/0.log" Mar 18 15:39:10 crc kubenswrapper[4857]: I0318 15:39:10.164611 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:39:10 crc kubenswrapper[4857]: E0318 15:39:10.165516 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:39:19 crc kubenswrapper[4857]: I0318 15:39:19.440453 4857 scope.go:117] "RemoveContainer" containerID="c8f09c73f10e410cc122138c0d441dd8a132c8d670180e50adafe6f30d5167d2" Mar 18 15:39:22 crc kubenswrapper[4857]: I0318 15:39:22.163562 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:39:22 crc kubenswrapper[4857]: E0318 15:39:22.164411 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 
18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.166613 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fjhn2_2cbcf5ed-41b1-4596-8e5d-05212018ba3b/kube-rbac-proxy/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.386491 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fjhn2_2cbcf5ed-41b1-4596-8e5d-05212018ba3b/controller/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.487048 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-wd764_75baf138-7643-4b4f-9919-88edd42aee95/frr-k8s-webhook-server/1.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.527007 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-wd764_75baf138-7643-4b4f-9919-88edd42aee95/frr-k8s-webhook-server/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.705254 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-frr-files/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.920930 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-reloader/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.921202 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-frr-files/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.925175 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-metrics/0.log" Mar 18 15:39:23 crc kubenswrapper[4857]: I0318 15:39:23.982657 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-reloader/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.554820 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-frr-files/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.562304 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-reloader/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.642569 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-metrics/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.649244 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-metrics/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.836088 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-frr-files/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.871793 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/controller/1.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.875420 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-reloader/0.log" Mar 18 15:39:24 crc kubenswrapper[4857]: I0318 15:39:24.916889 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/cp-metrics/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.078064 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/controller/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.100218 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/frr/1.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.189288 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/frr-metrics/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.202242 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/frr/2.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.358105 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/kube-rbac-proxy/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.359586 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/kube-rbac-proxy-frr/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.401605 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xtz2z_30a9ec00-16b4-4349-a2c6-a2e6397e0ce0/reloader/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.606654 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7889654c4-2jp9b_18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5/manager/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.610369 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7889654c4-2jp9b_18d750aa-73f2-4bad-9ca6-0e5cd4e4e4a5/manager/1.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.714922 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55fbd9db57-wcht9_7ae3e1fc-2002-4805-bed1-f96339dce3a0/webhook-server/1.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.844532 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55fbd9db57-wcht9_7ae3e1fc-2002-4805-bed1-f96339dce3a0/webhook-server/0.log" Mar 18 15:39:25 crc kubenswrapper[4857]: I0318 15:39:25.859094 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pm2jd_a73a34ce-a354-406b-ac7a-68b7f5aaf95b/kube-rbac-proxy/0.log" Mar 18 15:39:26 crc kubenswrapper[4857]: I0318 15:39:26.231000 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pm2jd_a73a34ce-a354-406b-ac7a-68b7f5aaf95b/speaker/1.log" Mar 18 15:39:27 crc kubenswrapper[4857]: I0318 15:39:27.133248 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pm2jd_a73a34ce-a354-406b-ac7a-68b7f5aaf95b/speaker/0.log" Mar 18 15:39:33 crc kubenswrapper[4857]: I0318 15:39:33.164282 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:39:33 crc kubenswrapper[4857]: E0318 15:39:33.165057 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.163332 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/util/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: 
I0318 15:39:46.165058 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:39:46 crc kubenswrapper[4857]: E0318 15:39:46.165514 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.322821 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/util/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.459484 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/pull/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.459654 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/pull/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.641742 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/pull/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.682132 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/util/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 
15:39:46.697207 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8747qcxv_a1a8a67d-e6ff-4782-8f41-b2481e0b5299/extract/0.log" Mar 18 15:39:46 crc kubenswrapper[4857]: I0318 15:39:46.880286 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/util/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.480001 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/util/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.521102 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/pull/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.527381 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/pull/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.756800 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/util/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.774884 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/pull/0.log" Mar 18 15:39:47 crc kubenswrapper[4857]: I0318 15:39:47.793890 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1ksd7z_c6a38c64-42fd-4eaa-9fb9-00b9317b1fa3/extract/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.039361 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/util/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.295016 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/pull/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.302877 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/util/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.331372 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/pull/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.585742 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/util/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.591675 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/extract/0.log" Mar 18 15:39:48 crc kubenswrapper[4857]: I0318 15:39:48.625947 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_3d9a37d2dd18988fcb5ca5f4f6b82950da05d40c4031e61bc3bfef57d57xw7z_98d7117e-e25b-4325-a0d7-31bc5930fd08/pull/0.log" Mar 
18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.330670 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/util/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.556655 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/pull/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.565655 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/pull/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.588019 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/util/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.838099 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/util/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.867913 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/pull/0.log" Mar 18 15:39:49 crc kubenswrapper[4857]: I0318 15:39:49.876214 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4be416c5f2f0b2736478b7cfc76f1b991abd25af724ba21bdbdad2dd6cq2lzx_9f6c7144-f8b7-4b54-bd26-806157743e00/extract/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.056244 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/util/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.283275 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/pull/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.289551 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/pull/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.313319 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/util/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.676514 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/pull/0.log" Mar 18 15:39:50 crc kubenswrapper[4857]: I0318 15:39:50.688792 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/extract/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.196987 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mtr6b_a640fe72-4cc0-46a9-b835-36c8d15718ce/util/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.334685 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-utilities/0.log" Mar 18 15:39:51 crc 
kubenswrapper[4857]: I0318 15:39:51.602194 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-utilities/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.617211 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-content/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.650479 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-content/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.811103 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-utilities/0.log" Mar 18 15:39:51 crc kubenswrapper[4857]: I0318 15:39:51.822400 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/extract-content/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.068799 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-utilities/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.102726 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/registry-server/1.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.352218 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-content/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.433118 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-content/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.453222 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-utilities/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.673164 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-content/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.733617 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zl78l_155a767b-458f-42b5-86f8-f73f4d585ee0/registry-server/0.log" Mar 18 15:39:52 crc kubenswrapper[4857]: I0318 15:39:52.762914 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/extract-utilities/0.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.339089 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vc2t4_b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c/marketplace-operator/1.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.344866 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vc2t4_b5e0cb1d-5255-4ac8-8fab-07ad4c4bfd8c/marketplace-operator/0.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.396852 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/registry-server/1.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.637761 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-utilities/0.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.819103 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-content/0.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.836903 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-utilities/0.log" Mar 18 15:39:53 crc kubenswrapper[4857]: I0318 15:39:53.884868 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-content/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.068033 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-89qls_2ae58b8b-bff1-46c3-b4a9-75c496dd8fdc/registry-server/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.128261 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-content/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.174859 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/extract-utilities/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.221299 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/registry-server/1.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.329595 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f9sl8_cb7efbe1-5cfd-4ddb-a334-fae43107aafd/registry-server/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.373085 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-utilities/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.612933 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-content/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.632033 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-content/0.log" Mar 18 15:39:54 crc kubenswrapper[4857]: I0318 15:39:54.672648 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-utilities/0.log" Mar 18 15:39:55 crc kubenswrapper[4857]: I0318 15:39:55.234775 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-content/0.log" Mar 18 15:39:55 crc kubenswrapper[4857]: I0318 15:39:55.322119 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/extract-utilities/0.log" Mar 18 15:39:55 crc kubenswrapper[4857]: I0318 15:39:55.514920 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/registry-server/1.log" Mar 18 15:39:56 crc kubenswrapper[4857]: I0318 15:39:56.311537 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7qbr_bdc29e5d-a2b4-4260-bbca-f5e1e5cb4900/registry-server/0.log" Mar 18 
15:39:57 crc kubenswrapper[4857]: I0318 15:39:57.451572 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:39:57 crc kubenswrapper[4857]: E0318 15:39:57.453261 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.165001 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564140-lgd8m"] Mar 18 15:40:00 crc kubenswrapper[4857]: E0318 15:40:00.166157 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a239f10e-d03f-498d-8026-504eb804ae3f" containerName="oc" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.166183 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a239f10e-d03f-498d-8026-504eb804ae3f" containerName="oc" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.166504 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a239f10e-d03f-498d-8026-504eb804ae3f" containerName="oc" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.167888 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.170974 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.171360 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.171653 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.180395 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564140-lgd8m"] Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.226391 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrdd\" (UniqueName: \"kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd\") pod \"auto-csr-approver-29564140-lgd8m\" (UID: \"9355f62b-3740-440c-8eaf-8260c8993413\") " pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.329040 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrdd\" (UniqueName: \"kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd\") pod \"auto-csr-approver-29564140-lgd8m\" (UID: \"9355f62b-3740-440c-8eaf-8260c8993413\") " pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.379302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrdd\" (UniqueName: \"kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd\") pod \"auto-csr-approver-29564140-lgd8m\" (UID: \"9355f62b-3740-440c-8eaf-8260c8993413\") " 
pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:00 crc kubenswrapper[4857]: I0318 15:40:00.505023 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:01 crc kubenswrapper[4857]: I0318 15:40:01.406398 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564140-lgd8m"] Mar 18 15:40:01 crc kubenswrapper[4857]: I0318 15:40:01.815342 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" event={"ID":"9355f62b-3740-440c-8eaf-8260c8993413","Type":"ContainerStarted","Data":"da074cca2aa98a8564bcbb29b64ffd13597dc0fc765c2b435d717d6e853193a6"} Mar 18 15:40:04 crc kubenswrapper[4857]: I0318 15:40:04.876211 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" event={"ID":"9355f62b-3740-440c-8eaf-8260c8993413","Type":"ContainerStarted","Data":"48bad40d2ff8647bcbf6a6d60ee5f0e8c974b3b960a433a098f071969ab79d21"} Mar 18 15:40:04 crc kubenswrapper[4857]: I0318 15:40:04.893600 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" podStartSLOduration=2.496708549 podStartE2EDuration="4.893572762s" podCreationTimestamp="2026-03-18 15:40:00 +0000 UTC" firstStartedPulling="2026-03-18 15:40:01.388062945 +0000 UTC m=+5985.517191412" lastFinishedPulling="2026-03-18 15:40:03.784927158 +0000 UTC m=+5987.914055625" observedRunningTime="2026-03-18 15:40:04.892240768 +0000 UTC m=+5989.021369235" watchObservedRunningTime="2026-03-18 15:40:04.893572762 +0000 UTC m=+5989.022701219" Mar 18 15:40:06 crc kubenswrapper[4857]: I0318 15:40:06.903615 4857 generic.go:334] "Generic (PLEG): container finished" podID="9355f62b-3740-440c-8eaf-8260c8993413" containerID="48bad40d2ff8647bcbf6a6d60ee5f0e8c974b3b960a433a098f071969ab79d21" exitCode=0 Mar 18 15:40:06 crc 
kubenswrapper[4857]: I0318 15:40:06.903687 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" event={"ID":"9355f62b-3740-440c-8eaf-8260c8993413","Type":"ContainerDied","Data":"48bad40d2ff8647bcbf6a6d60ee5f0e8c974b3b960a433a098f071969ab79d21"} Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.711276 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.888008 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrdd\" (UniqueName: \"kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd\") pod \"9355f62b-3740-440c-8eaf-8260c8993413\" (UID: \"9355f62b-3740-440c-8eaf-8260c8993413\") " Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.908236 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd" (OuterVolumeSpecName: "kube-api-access-tvrdd") pod "9355f62b-3740-440c-8eaf-8260c8993413" (UID: "9355f62b-3740-440c-8eaf-8260c8993413"). InnerVolumeSpecName "kube-api-access-tvrdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.933601 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" event={"ID":"9355f62b-3740-440c-8eaf-8260c8993413","Type":"ContainerDied","Data":"da074cca2aa98a8564bcbb29b64ffd13597dc0fc765c2b435d717d6e853193a6"} Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.933708 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da074cca2aa98a8564bcbb29b64ffd13597dc0fc765c2b435d717d6e853193a6" Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.933974 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564140-lgd8m" Mar 18 15:40:08 crc kubenswrapper[4857]: I0318 15:40:08.991916 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrdd\" (UniqueName: \"kubernetes.io/projected/9355f62b-3740-440c-8eaf-8260c8993413-kube-api-access-tvrdd\") on node \"crc\" DevicePath \"\"" Mar 18 15:40:09 crc kubenswrapper[4857]: I0318 15:40:09.013492 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564134-tqxnb"] Mar 18 15:40:09 crc kubenswrapper[4857]: I0318 15:40:09.026454 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564134-tqxnb"] Mar 18 15:40:09 crc kubenswrapper[4857]: I0318 15:40:09.179893 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397821a2-be75-4f0d-a83f-f61eb459c9cb" path="/var/lib/kubelet/pods/397821a2-be75-4f0d-a83f-f61eb459c9cb/volumes" Mar 18 15:40:10 crc kubenswrapper[4857]: I0318 15:40:10.164180 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:40:10 crc kubenswrapper[4857]: E0318 15:40:10.164827 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:40:15 crc kubenswrapper[4857]: I0318 15:40:15.914197 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-lsp5b_501dc1bd-0a04-4aef-bff8-43c9e767215f/prometheus-operator/0.log" Mar 18 15:40:15 crc kubenswrapper[4857]: I0318 15:40:15.945338 4857 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf4dcf5f6-9gzrh_518b89ef-5060-4ec2-9a2d-7c64fa3555a5/prometheus-operator-admission-webhook/0.log" Mar 18 15:40:15 crc kubenswrapper[4857]: I0318 15:40:15.991042 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bf4dcf5f6-p48fj_ec9235b6-6bfb-45f2-8cef-5deb5c0c2e47/prometheus-operator-admission-webhook/0.log" Mar 18 15:40:16 crc kubenswrapper[4857]: I0318 15:40:16.163873 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-5mw69_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59/operator/1.log" Mar 18 15:40:16 crc kubenswrapper[4857]: I0318 15:40:16.190338 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6c9d87fc97-ddtxj_79d3df2c-25f0-4e16-a39d-cc0d6a85277f/perses-operator/0.log" Mar 18 15:40:16 crc kubenswrapper[4857]: I0318 15:40:16.208986 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7f87b9b85b-lwdf5_6e52a810-35c4-49bb-a0f6-83accdb52311/observability-ui-dashboards/0.log" Mar 18 15:40:16 crc kubenswrapper[4857]: I0318 15:40:16.218062 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-5mw69_264f3d7a-0c38-4d0a-9ff7-4f3a24164f59/operator/0.log" Mar 18 15:40:19 crc kubenswrapper[4857]: I0318 15:40:19.648420 4857 scope.go:117] "RemoveContainer" containerID="5e6269276411348d1e0ea381ebd471d107e3d9ea3fd40c71631feabb31e05f98" Mar 18 15:40:22 crc kubenswrapper[4857]: I0318 15:40:22.164456 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:40:22 crc kubenswrapper[4857]: E0318 15:40:22.165273 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.220425 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5v2hx"] Mar 18 15:40:30 crc kubenswrapper[4857]: E0318 15:40:30.223143 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9355f62b-3740-440c-8eaf-8260c8993413" containerName="oc" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.223263 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="9355f62b-3740-440c-8eaf-8260c8993413" containerName="oc" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.223771 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="9355f62b-3740-440c-8eaf-8260c8993413" containerName="oc" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.230948 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.250146 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5v2hx"] Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.388052 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.388132 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvsk\" (UniqueName: \"kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.388236 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.491411 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.492053 4857 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ddvsk\" (UniqueName: \"kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.492170 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.493094 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.493583 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.514815 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvsk\" (UniqueName: \"kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk\") pod \"community-operators-5v2hx\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") " pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:30 crc kubenswrapper[4857]: I0318 15:40:30.575226 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5v2hx" Mar 18 15:40:31 crc kubenswrapper[4857]: I0318 15:40:31.206405 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5v2hx"] Mar 18 15:40:31 crc kubenswrapper[4857]: I0318 15:40:31.657645 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerStarted","Data":"5d4080a43ce42d8f02a30f1afaf1ffec90cf27b37b3a998e7c20b51350bebef2"} Mar 18 15:40:32 crc kubenswrapper[4857]: I0318 15:40:32.675537 4857 generic.go:334] "Generic (PLEG): container finished" podID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerID="1e4782464f3a1aa9346cc8c0c1327e34beccc10eb294f3d723ab6ac25076d554" exitCode=0 Mar 18 15:40:32 crc kubenswrapper[4857]: I0318 15:40:32.675595 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerDied","Data":"1e4782464f3a1aa9346cc8c0c1327e34beccc10eb294f3d723ab6ac25076d554"} Mar 18 15:40:34 crc kubenswrapper[4857]: I0318 15:40:34.776211 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerStarted","Data":"5dcd859b7c069afa2da62cd4eda077e1109e70b3ad051048fb3bdc5e1af771ae"} Mar 18 15:40:35 crc kubenswrapper[4857]: I0318 15:40:35.164165 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb" Mar 18 15:40:35 crc kubenswrapper[4857]: E0318 15:40:35.164703 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:40:37 crc kubenswrapper[4857]: I0318 15:40:37.819120 4857 generic.go:334] "Generic (PLEG): container finished" podID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerID="5dcd859b7c069afa2da62cd4eda077e1109e70b3ad051048fb3bdc5e1af771ae" exitCode=0
Mar 18 15:40:37 crc kubenswrapper[4857]: I0318 15:40:37.819217 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerDied","Data":"5dcd859b7c069afa2da62cd4eda077e1109e70b3ad051048fb3bdc5e1af771ae"}
Mar 18 15:40:39 crc kubenswrapper[4857]: I0318 15:40:39.182516 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-86c8cb9b45-kxpht_e5ba6b5a-524d-488a-9435-5fea2c394e6a/kube-rbac-proxy/0.log"
Mar 18 15:40:39 crc kubenswrapper[4857]: I0318 15:40:39.294446 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-86c8cb9b45-kxpht_e5ba6b5a-524d-488a-9435-5fea2c394e6a/manager/0.log"
Mar 18 15:40:39 crc kubenswrapper[4857]: I0318 15:40:39.843871 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerStarted","Data":"82f02a61febd41e31b30b04e1b7af562f8f448858bb4a5246790f54bdb0323da"}
Mar 18 15:40:39 crc kubenswrapper[4857]: I0318 15:40:39.869995 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5v2hx" podStartSLOduration=3.828069958 podStartE2EDuration="9.869968693s" podCreationTimestamp="2026-03-18 15:40:30 +0000 UTC" firstStartedPulling="2026-03-18 15:40:32.679682971 +0000 UTC m=+6016.808811418" lastFinishedPulling="2026-03-18 15:40:38.721581696 +0000 UTC m=+6022.850710153" observedRunningTime="2026-03-18 15:40:39.861651234 +0000 UTC m=+6023.990779691" watchObservedRunningTime="2026-03-18 15:40:39.869968693 +0000 UTC m=+6023.999097150"
Mar 18 15:40:40 crc kubenswrapper[4857]: I0318 15:40:40.578985 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:40:40 crc kubenswrapper[4857]: I0318 15:40:40.579044 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:40:41 crc kubenswrapper[4857]: I0318 15:40:41.656963 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5v2hx" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server" probeResult="failure" output=<
Mar 18 15:40:41 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 15:40:41 crc kubenswrapper[4857]: >
Mar 18 15:40:46 crc kubenswrapper[4857]: I0318 15:40:46.164075 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:40:46 crc kubenswrapper[4857]: E0318 15:40:46.164943 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:40:51 crc kubenswrapper[4857]: I0318 15:40:51.638791 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5v2hx" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server" probeResult="failure" output=<
Mar 18 15:40:51 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 15:40:51 crc kubenswrapper[4857]: >
Mar 18 15:41:00 crc kubenswrapper[4857]: I0318 15:41:00.652967 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:41:00 crc kubenswrapper[4857]: I0318 15:41:00.764785 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:41:00 crc kubenswrapper[4857]: I0318 15:41:00.914915 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5v2hx"]
Mar 18 15:41:01 crc kubenswrapper[4857]: I0318 15:41:01.164543 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:41:01 crc kubenswrapper[4857]: E0318 15:41:01.165044 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:41:02 crc kubenswrapper[4857]: I0318 15:41:02.593132 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5v2hx" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server" containerID="cri-o://82f02a61febd41e31b30b04e1b7af562f8f448858bb4a5246790f54bdb0323da" gracePeriod=2
Mar 18 15:41:03 crc kubenswrapper[4857]: I0318 15:41:03.615498 4857 generic.go:334] "Generic (PLEG): container finished" podID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerID="82f02a61febd41e31b30b04e1b7af562f8f448858bb4a5246790f54bdb0323da" exitCode=0
Mar 18 15:41:03 crc kubenswrapper[4857]: I0318 15:41:03.615592 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerDied","Data":"82f02a61febd41e31b30b04e1b7af562f8f448858bb4a5246790f54bdb0323da"}
Mar 18 15:41:04 crc kubenswrapper[4857]: E0318 15:41:04.852132 4857 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.89:48820->38.102.83.89:44309: write tcp 38.102.83.89:48820->38.102.83.89:44309: write: broken pipe
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.062190 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.062952 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5v2hx" event={"ID":"33aad116-edb1-49a1-83bb-b07ce60e77d8","Type":"ContainerDied","Data":"5d4080a43ce42d8f02a30f1afaf1ffec90cf27b37b3a998e7c20b51350bebef2"}
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.064576 4857 scope.go:117] "RemoveContainer" containerID="82f02a61febd41e31b30b04e1b7af562f8f448858bb4a5246790f54bdb0323da"
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.124911 4857 scope.go:117] "RemoveContainer" containerID="5dcd859b7c069afa2da62cd4eda077e1109e70b3ad051048fb3bdc5e1af771ae"
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.173167 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvsk\" (UniqueName: \"kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk\") pod \"33aad116-edb1-49a1-83bb-b07ce60e77d8\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") "
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.173266 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities\") pod \"33aad116-edb1-49a1-83bb-b07ce60e77d8\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") "
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.173414 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content\") pod \"33aad116-edb1-49a1-83bb-b07ce60e77d8\" (UID: \"33aad116-edb1-49a1-83bb-b07ce60e77d8\") "
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.176911 4857 scope.go:117] "RemoveContainer" containerID="1e4782464f3a1aa9346cc8c0c1327e34beccc10eb294f3d723ab6ac25076d554"
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.178399 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities" (OuterVolumeSpecName: "utilities") pod "33aad116-edb1-49a1-83bb-b07ce60e77d8" (UID: "33aad116-edb1-49a1-83bb-b07ce60e77d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.193062 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk" (OuterVolumeSpecName: "kube-api-access-ddvsk") pod "33aad116-edb1-49a1-83bb-b07ce60e77d8" (UID: "33aad116-edb1-49a1-83bb-b07ce60e77d8"). InnerVolumeSpecName "kube-api-access-ddvsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.278133 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvsk\" (UniqueName: \"kubernetes.io/projected/33aad116-edb1-49a1-83bb-b07ce60e77d8-kube-api-access-ddvsk\") on node \"crc\" DevicePath \"\""
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.278207 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-utilities\") on node \"crc\" DevicePath \"\""
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.422280 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33aad116-edb1-49a1-83bb-b07ce60e77d8" (UID: "33aad116-edb1-49a1-83bb-b07ce60e77d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 15:41:06 crc kubenswrapper[4857]: I0318 15:41:06.495500 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33aad116-edb1-49a1-83bb-b07ce60e77d8-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 18 15:41:07 crc kubenswrapper[4857]: I0318 15:41:07.090435 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5v2hx"
Mar 18 15:41:07 crc kubenswrapper[4857]: I0318 15:41:07.149291 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5v2hx"]
Mar 18 15:41:07 crc kubenswrapper[4857]: I0318 15:41:07.206169 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5v2hx"]
Mar 18 15:41:09 crc kubenswrapper[4857]: I0318 15:41:09.655514 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" path="/var/lib/kubelet/pods/33aad116-edb1-49a1-83bb-b07ce60e77d8/volumes"
Mar 18 15:41:12 crc kubenswrapper[4857]: I0318 15:41:12.164427 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:41:12 crc kubenswrapper[4857]: E0318 15:41:12.165094 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:41:26 crc kubenswrapper[4857]: I0318 15:41:26.379170 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:41:26 crc kubenswrapper[4857]: E0318 15:41:26.380114 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:41:38 crc kubenswrapper[4857]: I0318 15:41:38.163924 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:41:38 crc kubenswrapper[4857]: E0318 15:41:38.164902 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:41:49 crc kubenswrapper[4857]: I0318 15:41:49.087004 4857 trace.go:236] Trace[2104101446]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (18-Mar-2026 15:41:47.936) (total time: 1149ms):
Mar 18 15:41:49 crc kubenswrapper[4857]: Trace[2104101446]: [1.149080375s] [1.149080375s] END
Mar 18 15:41:52 crc kubenswrapper[4857]: I0318 15:41:52.164062 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:41:52 crc kubenswrapper[4857]: E0318 15:41:52.165316 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.181050 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564142-8wd5z"]
Mar 18 15:42:00 crc kubenswrapper[4857]: E0318 15:42:00.182631 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="extract-content"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.182669 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="extract-content"
Mar 18 15:42:00 crc kubenswrapper[4857]: E0318 15:42:00.182704 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.182717 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server"
Mar 18 15:42:00 crc kubenswrapper[4857]: E0318 15:42:00.182832 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="extract-utilities"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.182847 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="extract-utilities"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.183379 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="33aad116-edb1-49a1-83bb-b07ce60e77d8" containerName="registry-server"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.185006 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.188101 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.188240 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.188334 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.204855 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564142-8wd5z"]
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.258342 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dw82\" (UniqueName: \"kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82\") pod \"auto-csr-approver-29564142-8wd5z\" (UID: \"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1\") " pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.361498 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dw82\" (UniqueName: \"kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82\") pod \"auto-csr-approver-29564142-8wd5z\" (UID: \"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1\") " pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.384124 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dw82\" (UniqueName: \"kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82\") pod \"auto-csr-approver-29564142-8wd5z\" (UID: \"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1\") " pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:00 crc kubenswrapper[4857]: I0318 15:42:00.526816 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:01 crc kubenswrapper[4857]: I0318 15:42:01.145188 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564142-8wd5z"]
Mar 18 15:42:01 crc kubenswrapper[4857]: I0318 15:42:01.781825 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564142-8wd5z" event={"ID":"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1","Type":"ContainerStarted","Data":"75036fa8a24a1d0c68436a5a43680b0cc79aae346e155be080715791ef15656b"}
Mar 18 15:42:04 crc kubenswrapper[4857]: I0318 15:42:04.818859 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564142-8wd5z" event={"ID":"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1","Type":"ContainerStarted","Data":"bf0686be1b537d31ab67e35c12e566cd7b380ac5775fec71cd97be2785eadadd"}
Mar 18 15:42:04 crc kubenswrapper[4857]: I0318 15:42:04.837793 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564142-8wd5z" podStartSLOduration=3.31288084 podStartE2EDuration="4.83773696s" podCreationTimestamp="2026-03-18 15:42:00 +0000 UTC" firstStartedPulling="2026-03-18 15:42:01.484292085 +0000 UTC m=+6105.613420552" lastFinishedPulling="2026-03-18 15:42:03.009148215 +0000 UTC m=+6107.138276672" observedRunningTime="2026-03-18 15:42:04.832511239 +0000 UTC m=+6108.961639696" watchObservedRunningTime="2026-03-18 15:42:04.83773696 +0000 UTC m=+6108.966865417"
Mar 18 15:42:05 crc kubenswrapper[4857]: I0318 15:42:05.849358 4857 generic.go:334] "Generic (PLEG): container finished" podID="5e4156dc-bd6d-48e2-9e32-dbb950ce01e1" containerID="bf0686be1b537d31ab67e35c12e566cd7b380ac5775fec71cd97be2785eadadd" exitCode=0
Mar 18 15:42:05 crc kubenswrapper[4857]: I0318 15:42:05.849416 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564142-8wd5z" event={"ID":"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1","Type":"ContainerDied","Data":"bf0686be1b537d31ab67e35c12e566cd7b380ac5775fec71cd97be2785eadadd"}
Mar 18 15:42:06 crc kubenswrapper[4857]: I0318 15:42:06.165965 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:42:06 crc kubenswrapper[4857]: E0318 15:42:06.166631 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.373308 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.456538 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dw82\" (UniqueName: \"kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82\") pod \"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1\" (UID: \"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1\") "
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.463506 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82" (OuterVolumeSpecName: "kube-api-access-9dw82") pod "5e4156dc-bd6d-48e2-9e32-dbb950ce01e1" (UID: "5e4156dc-bd6d-48e2-9e32-dbb950ce01e1"). InnerVolumeSpecName "kube-api-access-9dw82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.560179 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dw82\" (UniqueName: \"kubernetes.io/projected/5e4156dc-bd6d-48e2-9e32-dbb950ce01e1-kube-api-access-9dw82\") on node \"crc\" DevicePath \"\""
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.886074 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564142-8wd5z" event={"ID":"5e4156dc-bd6d-48e2-9e32-dbb950ce01e1","Type":"ContainerDied","Data":"75036fa8a24a1d0c68436a5a43680b0cc79aae346e155be080715791ef15656b"}
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.886147 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75036fa8a24a1d0c68436a5a43680b0cc79aae346e155be080715791ef15656b"
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.886234 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564142-8wd5z"
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.971582 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564136-tnfpb"]
Mar 18 15:42:07 crc kubenswrapper[4857]: I0318 15:42:07.986019 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564136-tnfpb"]
Mar 18 15:42:09 crc kubenswrapper[4857]: I0318 15:42:09.188991 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9fdabc-fe50-430c-a8fb-b376cd3a31e9" path="/var/lib/kubelet/pods/9b9fdabc-fe50-430c-a8fb-b376cd3a31e9/volumes"
Mar 18 15:42:17 crc kubenswrapper[4857]: I0318 15:42:17.198821 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:42:17 crc kubenswrapper[4857]: E0318 15:42:17.201555 4857 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sjqg6_openshift-machine-config-operator(b115eb6c-2a12-4d60-b269-911a639d8eb1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1"
Mar 18 15:42:19 crc kubenswrapper[4857]: I0318 15:42:19.913076 4857 scope.go:117] "RemoveContainer" containerID="4e0bc1f0dc40fdd9a215c26c2bde303e9b4af474a748823a5c10593cf5d5e626"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.472352 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"]
Mar 18 15:42:26 crc kubenswrapper[4857]: E0318 15:42:26.473832 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4156dc-bd6d-48e2-9e32-dbb950ce01e1" containerName="oc"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.473853 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4156dc-bd6d-48e2-9e32-dbb950ce01e1" containerName="oc"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.474153 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4156dc-bd6d-48e2-9e32-dbb950ce01e1" containerName="oc"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.476573 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.486606 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"]
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.570252 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.570322 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm2hb\" (UniqueName: \"kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.570847 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.673190 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.673480 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.673526 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm2hb\" (UniqueName: \"kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.674871 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.674691 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.700012 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm2hb\" (UniqueName: \"kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb\") pod \"redhat-operators-z29h5\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:26 crc kubenswrapper[4857]: I0318 15:42:26.810839 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z29h5"
Mar 18 15:42:27 crc kubenswrapper[4857]: I0318 15:42:27.302846 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"]
Mar 18 15:42:28 crc kubenswrapper[4857]: I0318 15:42:28.199557 4857 generic.go:334] "Generic (PLEG): container finished" podID="8e927200-a128-4f8f-a268-9481ba71b710" containerID="23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf" exitCode=0
Mar 18 15:42:28 crc kubenswrapper[4857]: I0318 15:42:28.199643 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerDied","Data":"23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf"}
Mar 18 15:42:28 crc kubenswrapper[4857]: I0318 15:42:28.200112 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerStarted","Data":"a010f7a6618b43e60d4c15737367d865040ff08d2c365e87882a5cf027b60e50"}
Mar 18 15:42:29 crc kubenswrapper[4857]: I0318 15:42:29.165510 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:42:30 crc kubenswrapper[4857]: I0318 15:42:30.255612 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"b83bd0d190b74d8409c4dbeaad53f50cbd1448eb39e1cfc8e59cef41ac7743b5"}
Mar 18 15:42:30 crc kubenswrapper[4857]: I0318 15:42:30.258732 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerStarted","Data":"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e"}
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.856481 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"]
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.860543 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.871698 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"]
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.924987 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.925081 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:31 crc kubenswrapper[4857]: I0318 15:42:31.925340 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnbn\" (UniqueName: \"kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.027846 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnbn\" (UniqueName: \"kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.028336 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.028390 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.029008 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.029025 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.048938 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnbn\" (UniqueName: \"kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn\") pod \"redhat-marketplace-sztj5\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.198953 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:32 crc kubenswrapper[4857]: I0318 15:42:32.810441 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"]
Mar 18 15:42:33 crc kubenswrapper[4857]: I0318 15:42:33.303093 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerStarted","Data":"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302"}
Mar 18 15:42:33 crc kubenswrapper[4857]: I0318 15:42:33.303485 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerStarted","Data":"756023bd558f1e3cc8e52ae59a1bd74ae1482d790ed125e1e9df5e373da06428"}
Mar 18 15:42:34 crc kubenswrapper[4857]: I0318 15:42:34.321258 4857 generic.go:334] "Generic (PLEG): container finished" podID="99035b43-7686-419a-b3d4-b849f9de7b93" containerID="41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302" exitCode=0
Mar 18 15:42:34 crc kubenswrapper[4857]: I0318 15:42:34.321339 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerDied","Data":"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302"}
Mar 18 15:42:37 crc kubenswrapper[4857]: I0318 15:42:37.363779 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerStarted","Data":"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5"}
Mar 18 15:42:37 crc kubenswrapper[4857]: I0318 15:42:37.369013 4857 generic.go:334] "Generic (PLEG): container finished" podID="8e927200-a128-4f8f-a268-9481ba71b710" containerID="68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e" exitCode=0
Mar 18 15:42:37 crc kubenswrapper[4857]: I0318 15:42:37.369048 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerDied","Data":"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e"}
Mar 18 15:42:39 crc kubenswrapper[4857]: I0318 15:42:39.410168 4857 generic.go:334] "Generic (PLEG): container finished" podID="99035b43-7686-419a-b3d4-b849f9de7b93" containerID="6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5" exitCode=0
Mar 18 15:42:39 crc kubenswrapper[4857]: I0318 15:42:39.410362 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerDied","Data":"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5"}
Mar 18 15:42:39 crc kubenswrapper[4857]: I0318 15:42:39.420388 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerStarted","Data":"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82"}
Mar 18 15:42:39 crc kubenswrapper[4857]: I0318 15:42:39.469848 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z29h5" podStartSLOduration=3.692068675 podStartE2EDuration="13.469813408s" podCreationTimestamp="2026-03-18 15:42:26 +0000 UTC" firstStartedPulling="2026-03-18 15:42:28.202160083 +0000 UTC m=+6132.331288550" lastFinishedPulling="2026-03-18 15:42:37.979904806 +0000 UTC m=+6142.109033283" observedRunningTime="2026-03-18 15:42:39.463825738 +0000 UTC m=+6143.592954195" watchObservedRunningTime="2026-03-18 15:42:39.469813408 +0000 UTC m=+6143.598941895"
Mar 18 15:42:40 crc kubenswrapper[4857]: I0318 15:42:40.436916 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerStarted","Data":"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad"}
Mar 18 15:42:40 crc kubenswrapper[4857]: I0318 15:42:40.465000 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sztj5" podStartSLOduration=3.871620613 podStartE2EDuration="9.464977643s" podCreationTimestamp="2026-03-18 15:42:31 +0000 UTC" firstStartedPulling="2026-03-18 15:42:34.326343806 +0000 UTC m=+6138.455472263" lastFinishedPulling="2026-03-18 15:42:39.919700836 +0000 UTC m=+6144.048829293" observedRunningTime="2026-03-18 15:42:40.459065515 +0000 UTC m=+6144.588193982" watchObservedRunningTime="2026-03-18 15:42:40.464977643 +0000 UTC m=+6144.594106100"
Mar 18 15:42:42 crc kubenswrapper[4857]: I0318 15:42:42.199345 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:42 crc kubenswrapper[4857]: I0318 15:42:42.200496 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sztj5"
Mar 18 15:42:43 crc kubenswrapper[4857]: I0318 15:42:43.269455 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sztj5" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="registry-server" probeResult="failure" output=<
Mar 18 15:42:43 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s
Mar 18 15:42:43 crc
kubenswrapper[4857]: > Mar 18 15:42:46 crc kubenswrapper[4857]: I0318 15:42:46.812461 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:42:46 crc kubenswrapper[4857]: I0318 15:42:46.814343 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:42:47 crc kubenswrapper[4857]: I0318 15:42:47.876276 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z29h5" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" probeResult="failure" output=< Mar 18 15:42:47 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:42:47 crc kubenswrapper[4857]: > Mar 18 15:42:52 crc kubenswrapper[4857]: I0318 15:42:52.539207 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sztj5" Mar 18 15:42:52 crc kubenswrapper[4857]: I0318 15:42:52.606294 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sztj5" Mar 18 15:42:52 crc kubenswrapper[4857]: I0318 15:42:52.815254 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"] Mar 18 15:42:53 crc kubenswrapper[4857]: I0318 15:42:53.653092 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sztj5" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="registry-server" containerID="cri-o://0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad" gracePeriod=2 Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.253080 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sztj5" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.311913 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities\") pod \"99035b43-7686-419a-b3d4-b849f9de7b93\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.312393 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content\") pod \"99035b43-7686-419a-b3d4-b849f9de7b93\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.312599 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnbn\" (UniqueName: \"kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn\") pod \"99035b43-7686-419a-b3d4-b849f9de7b93\" (UID: \"99035b43-7686-419a-b3d4-b849f9de7b93\") " Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.313435 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities" (OuterVolumeSpecName: "utilities") pod "99035b43-7686-419a-b3d4-b849f9de7b93" (UID: "99035b43-7686-419a-b3d4-b849f9de7b93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.319806 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn" (OuterVolumeSpecName: "kube-api-access-6vnbn") pod "99035b43-7686-419a-b3d4-b849f9de7b93" (UID: "99035b43-7686-419a-b3d4-b849f9de7b93"). InnerVolumeSpecName "kube-api-access-6vnbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.342188 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99035b43-7686-419a-b3d4-b849f9de7b93" (UID: "99035b43-7686-419a-b3d4-b849f9de7b93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.416510 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnbn\" (UniqueName: \"kubernetes.io/projected/99035b43-7686-419a-b3d4-b849f9de7b93-kube-api-access-6vnbn\") on node \"crc\" DevicePath \"\"" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.416895 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.417079 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99035b43-7686-419a-b3d4-b849f9de7b93-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.666008 4857 generic.go:334] "Generic (PLEG): container finished" podID="99035b43-7686-419a-b3d4-b849f9de7b93" containerID="0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad" exitCode=0 Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.666091 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sztj5" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.666080 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerDied","Data":"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad"} Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.667312 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sztj5" event={"ID":"99035b43-7686-419a-b3d4-b849f9de7b93","Type":"ContainerDied","Data":"756023bd558f1e3cc8e52ae59a1bd74ae1482d790ed125e1e9df5e373da06428"} Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.667347 4857 scope.go:117] "RemoveContainer" containerID="0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.694843 4857 scope.go:117] "RemoveContainer" containerID="6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.731723 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"] Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.734948 4857 scope.go:117] "RemoveContainer" containerID="41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.749046 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sztj5"] Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.809494 4857 scope.go:117] "RemoveContainer" containerID="0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad" Mar 18 15:42:54 crc kubenswrapper[4857]: E0318 15:42:54.811592 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad\": container with ID starting with 0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad not found: ID does not exist" containerID="0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.811646 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad"} err="failed to get container status \"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad\": rpc error: code = NotFound desc = could not find container \"0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad\": container with ID starting with 0e27d3e33433dd0e22228e14056b5f7ae99baa19738b4a4043e8cb672b3ef2ad not found: ID does not exist" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.811679 4857 scope.go:117] "RemoveContainer" containerID="6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5" Mar 18 15:42:54 crc kubenswrapper[4857]: E0318 15:42:54.812181 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5\": container with ID starting with 6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5 not found: ID does not exist" containerID="6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.812211 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5"} err="failed to get container status \"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5\": rpc error: code = NotFound desc = could not find container \"6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5\": container with ID 
starting with 6a1e2aeabb402522bbe947d858e4f8a69d808664dadcce16c31b83830f2afcb5 not found: ID does not exist" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.812230 4857 scope.go:117] "RemoveContainer" containerID="41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302" Mar 18 15:42:54 crc kubenswrapper[4857]: E0318 15:42:54.812682 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302\": container with ID starting with 41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302 not found: ID does not exist" containerID="41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302" Mar 18 15:42:54 crc kubenswrapper[4857]: I0318 15:42:54.812712 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302"} err="failed to get container status \"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302\": rpc error: code = NotFound desc = could not find container \"41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302\": container with ID starting with 41fec18ba3b9074d23e14f4b85af24491125b6218dd8a84c6284a4330fc7c302 not found: ID does not exist" Mar 18 15:42:55 crc kubenswrapper[4857]: I0318 15:42:55.180450 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" path="/var/lib/kubelet/pods/99035b43-7686-419a-b3d4-b849f9de7b93/volumes" Mar 18 15:42:57 crc kubenswrapper[4857]: I0318 15:42:57.862284 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z29h5" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" probeResult="failure" output=< Mar 18 15:42:57 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:42:57 crc 
kubenswrapper[4857]: > Mar 18 15:43:07 crc kubenswrapper[4857]: I0318 15:43:07.869716 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z29h5" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" probeResult="failure" output=< Mar 18 15:43:07 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:43:07 crc kubenswrapper[4857]: > Mar 18 15:43:11 crc kubenswrapper[4857]: I0318 15:43:11.908917 4857 generic.go:334] "Generic (PLEG): container finished" podID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerID="5461296be121f426841e9d8e246dc400addb8ff017f52665513b09a9b3199d4d" exitCode=0 Mar 18 15:43:11 crc kubenswrapper[4857]: I0318 15:43:11.909039 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2s484/must-gather-h8gst" event={"ID":"a85e0a42-0de7-4c3e-959f-3b16528da79c","Type":"ContainerDied","Data":"5461296be121f426841e9d8e246dc400addb8ff017f52665513b09a9b3199d4d"} Mar 18 15:43:11 crc kubenswrapper[4857]: I0318 15:43:11.911450 4857 scope.go:117] "RemoveContainer" containerID="5461296be121f426841e9d8e246dc400addb8ff017f52665513b09a9b3199d4d" Mar 18 15:43:12 crc kubenswrapper[4857]: I0318 15:43:12.282827 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2s484_must-gather-h8gst_a85e0a42-0de7-4c3e-959f-3b16528da79c/gather/0.log" Mar 18 15:43:17 crc kubenswrapper[4857]: I0318 15:43:17.962000 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z29h5" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" probeResult="failure" output=< Mar 18 15:43:17 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:43:17 crc kubenswrapper[4857]: > Mar 18 15:43:24 crc kubenswrapper[4857]: I0318 15:43:24.963929 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-2s484/must-gather-h8gst"] Mar 18 15:43:24 crc kubenswrapper[4857]: I0318 15:43:24.964887 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2s484/must-gather-h8gst" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="copy" containerID="cri-o://b8cce18dc3defd2b352b99e57e508af0e55fea62ae140f3a0aa913672fe5193e" gracePeriod=2 Mar 18 15:43:24 crc kubenswrapper[4857]: I0318 15:43:24.991168 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2s484/must-gather-h8gst"] Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.143337 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2s484_must-gather-h8gst_a85e0a42-0de7-4c3e-959f-3b16528da79c/copy/0.log" Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.144207 4857 generic.go:334] "Generic (PLEG): container finished" podID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerID="b8cce18dc3defd2b352b99e57e508af0e55fea62ae140f3a0aa913672fe5193e" exitCode=143 Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.508081 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2s484_must-gather-h8gst_a85e0a42-0de7-4c3e-959f-3b16528da79c/copy/0.log" Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.508784 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.699200 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output\") pod \"a85e0a42-0de7-4c3e-959f-3b16528da79c\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.699465 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z9fw\" (UniqueName: \"kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw\") pod \"a85e0a42-0de7-4c3e-959f-3b16528da79c\" (UID: \"a85e0a42-0de7-4c3e-959f-3b16528da79c\") " Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.708703 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw" (OuterVolumeSpecName: "kube-api-access-5z9fw") pod "a85e0a42-0de7-4c3e-959f-3b16528da79c" (UID: "a85e0a42-0de7-4c3e-959f-3b16528da79c"). InnerVolumeSpecName "kube-api-access-5z9fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.803035 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z9fw\" (UniqueName: \"kubernetes.io/projected/a85e0a42-0de7-4c3e-959f-3b16528da79c-kube-api-access-5z9fw\") on node \"crc\" DevicePath \"\"" Mar 18 15:43:25 crc kubenswrapper[4857]: I0318 15:43:25.908364 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a85e0a42-0de7-4c3e-959f-3b16528da79c" (UID: "a85e0a42-0de7-4c3e-959f-3b16528da79c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.076089 4857 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a85e0a42-0de7-4c3e-959f-3b16528da79c-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.158507 4857 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2s484_must-gather-h8gst_a85e0a42-0de7-4c3e-959f-3b16528da79c/copy/0.log" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.159583 4857 scope.go:117] "RemoveContainer" containerID="b8cce18dc3defd2b352b99e57e508af0e55fea62ae140f3a0aa913672fe5193e" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.159613 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2s484/must-gather-h8gst" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.192907 4857 scope.go:117] "RemoveContainer" containerID="5461296be121f426841e9d8e246dc400addb8ff017f52665513b09a9b3199d4d" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.880047 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:43:26 crc kubenswrapper[4857]: I0318 15:43:26.939804 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:43:27 crc kubenswrapper[4857]: I0318 15:43:27.236134 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" path="/var/lib/kubelet/pods/a85e0a42-0de7-4c3e-959f-3b16528da79c/volumes" Mar 18 15:43:27 crc kubenswrapper[4857]: I0318 15:43:27.237238 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"] Mar 18 15:43:28 crc kubenswrapper[4857]: I0318 15:43:28.263225 4857 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z29h5" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" containerID="cri-o://4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82" gracePeriod=2 Mar 18 15:43:28 crc kubenswrapper[4857]: I0318 15:43:28.841687 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.232591 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm2hb\" (UniqueName: \"kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb\") pod \"8e927200-a128-4f8f-a268-9481ba71b710\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.232646 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content\") pod \"8e927200-a128-4f8f-a268-9481ba71b710\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.232766 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities\") pod \"8e927200-a128-4f8f-a268-9481ba71b710\" (UID: \"8e927200-a128-4f8f-a268-9481ba71b710\") " Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.234220 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities" (OuterVolumeSpecName: "utilities") pod "8e927200-a128-4f8f-a268-9481ba71b710" (UID: "8e927200-a128-4f8f-a268-9481ba71b710"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.291133 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb" (OuterVolumeSpecName: "kube-api-access-jm2hb") pod "8e927200-a128-4f8f-a268-9481ba71b710" (UID: "8e927200-a128-4f8f-a268-9481ba71b710"). InnerVolumeSpecName "kube-api-access-jm2hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.343610 4857 generic.go:334] "Generic (PLEG): container finished" podID="8e927200-a128-4f8f-a268-9481ba71b710" containerID="4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82" exitCode=0 Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.343670 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerDied","Data":"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82"} Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.343705 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z29h5" event={"ID":"8e927200-a128-4f8f-a268-9481ba71b710","Type":"ContainerDied","Data":"a010f7a6618b43e60d4c15737367d865040ff08d2c365e87882a5cf027b60e50"} Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.343738 4857 scope.go:117] "RemoveContainer" containerID="4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.344037 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z29h5" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.349011 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm2hb\" (UniqueName: \"kubernetes.io/projected/8e927200-a128-4f8f-a268-9481ba71b710-kube-api-access-jm2hb\") on node \"crc\" DevicePath \"\"" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.349044 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.385001 4857 scope.go:117] "RemoveContainer" containerID="68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.407929 4857 scope.go:117] "RemoveContainer" containerID="23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.451708 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e927200-a128-4f8f-a268-9481ba71b710" (UID: "8e927200-a128-4f8f-a268-9481ba71b710"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.453628 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e927200-a128-4f8f-a268-9481ba71b710-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.494586 4857 scope.go:117] "RemoveContainer" containerID="4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82" Mar 18 15:43:29 crc kubenswrapper[4857]: E0318 15:43:29.495291 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82\": container with ID starting with 4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82 not found: ID does not exist" containerID="4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.495357 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82"} err="failed to get container status \"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82\": rpc error: code = NotFound desc = could not find container \"4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82\": container with ID starting with 4ab9f3218cde8abf3e62f203f74ffda540fc86c95e481e5f323f13c111af9f82 not found: ID does not exist" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.495425 4857 scope.go:117] "RemoveContainer" containerID="68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e" Mar 18 15:43:29 crc kubenswrapper[4857]: E0318 15:43:29.495817 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e\": container with ID starting with 68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e not found: ID does not exist" containerID="68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.495856 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e"} err="failed to get container status \"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e\": rpc error: code = NotFound desc = could not find container \"68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e\": container with ID starting with 68e2fd52ddae2d130afa29e178642de4f747b46ae643d1dcb8687a8d6f2c1e5e not found: ID does not exist" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.495885 4857 scope.go:117] "RemoveContainer" containerID="23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf" Mar 18 15:43:29 crc kubenswrapper[4857]: E0318 15:43:29.496084 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf\": container with ID starting with 23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf not found: ID does not exist" containerID="23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.496106 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf"} err="failed to get container status \"23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf\": rpc error: code = NotFound desc = could not find container \"23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf\": container with ID 
starting with 23374ebc7143c0b1d192a6136ce099b01af014fbc897253245b683a2b8904edf not found: ID does not exist" Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.694110 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"] Mar 18 15:43:29 crc kubenswrapper[4857]: I0318 15:43:29.708686 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z29h5"] Mar 18 15:43:31 crc kubenswrapper[4857]: I0318 15:43:31.177692 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e927200-a128-4f8f-a268-9481ba71b710" path="/var/lib/kubelet/pods/8e927200-a128-4f8f-a268-9481ba71b710/volumes" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.334251 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.337409 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="copy" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.337926 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="copy" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.338151 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.338326 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.338480 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="extract-utilities" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.338669 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="extract-utilities" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.338882 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="extract-content" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.339068 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="extract-content" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.339285 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.339434 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.339631 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="extract-utilities" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.339805 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="extract-utilities" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.340044 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="extract-content" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.340254 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="extract-content" Mar 18 15:43:57 crc kubenswrapper[4857]: E0318 15:43:57.340471 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="gather" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.340644 4857 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="gather" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.341226 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="copy" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.341356 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85e0a42-0de7-4c3e-959f-3b16528da79c" containerName="gather" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.341489 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e927200-a128-4f8f-a268-9481ba71b710" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.341593 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="99035b43-7686-419a-b3d4-b849f9de7b93" containerName="registry-server" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.345713 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.352354 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.551316 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.551956 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " 
pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.552336 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv88h\" (UniqueName: \"kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.655036 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.655659 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv88h\" (UniqueName: \"kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.655862 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.656302 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " 
pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.656602 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.681642 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv88h\" (UniqueName: \"kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h\") pod \"certified-operators-mgqh2\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:57 crc kubenswrapper[4857]: I0318 15:43:57.974582 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:43:58 crc kubenswrapper[4857]: I0318 15:43:58.783640 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:43:59 crc kubenswrapper[4857]: I0318 15:43:59.239723 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerStarted","Data":"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002"} Mar 18 15:43:59 crc kubenswrapper[4857]: I0318 15:43:59.240082 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerStarted","Data":"10a8020944b41e59783b50e6e33d9a68130554a652325f32ed5c99d62323bc58"} Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.208534 4857 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29564144-mc6fm"] Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.211022 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.214835 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78" Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.216373 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.220407 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.222553 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564144-mc6fm"] Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.259952 4857 generic.go:334] "Generic (PLEG): container finished" podID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerID="f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002" exitCode=0 Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.260040 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerDied","Data":"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002"} Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.279150 4857 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.332390 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf9vk\" (UniqueName: \"kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk\") pod 
\"auto-csr-approver-29564144-mc6fm\" (UID: \"421ae1e3-7a8a-4652-ade7-17cd5388fc4e\") " pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:00 crc kubenswrapper[4857]: I0318 15:44:00.435931 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf9vk\" (UniqueName: \"kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk\") pod \"auto-csr-approver-29564144-mc6fm\" (UID: \"421ae1e3-7a8a-4652-ade7-17cd5388fc4e\") " pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:01 crc kubenswrapper[4857]: I0318 15:44:01.381598 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf9vk\" (UniqueName: \"kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk\") pod \"auto-csr-approver-29564144-mc6fm\" (UID: \"421ae1e3-7a8a-4652-ade7-17cd5388fc4e\") " pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:01 crc kubenswrapper[4857]: I0318 15:44:01.436068 4857 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:02 crc kubenswrapper[4857]: I0318 15:44:02.199862 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564144-mc6fm"] Mar 18 15:44:02 crc kubenswrapper[4857]: W0318 15:44:02.206901 4857 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod421ae1e3_7a8a_4652_ade7_17cd5388fc4e.slice/crio-8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006 WatchSource:0}: Error finding container 8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006: Status 404 returned error can't find the container with id 8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006 Mar 18 15:44:02 crc kubenswrapper[4857]: I0318 15:44:02.295612 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" event={"ID":"421ae1e3-7a8a-4652-ade7-17cd5388fc4e","Type":"ContainerStarted","Data":"8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006"} Mar 18 15:44:02 crc kubenswrapper[4857]: I0318 15:44:02.298902 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerStarted","Data":"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37"} Mar 18 15:44:04 crc kubenswrapper[4857]: I0318 15:44:04.331355 4857 generic.go:334] "Generic (PLEG): container finished" podID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerID="036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37" exitCode=0 Mar 18 15:44:04 crc kubenswrapper[4857]: I0318 15:44:04.331418 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" 
event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerDied","Data":"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37"} Mar 18 15:44:05 crc kubenswrapper[4857]: I0318 15:44:05.350533 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerStarted","Data":"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34"} Mar 18 15:44:05 crc kubenswrapper[4857]: I0318 15:44:05.352721 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" event={"ID":"421ae1e3-7a8a-4652-ade7-17cd5388fc4e","Type":"ContainerStarted","Data":"9575bb1fe61429b638e6df4363f0e605d0415a405eafaa6d9360fdab45302422"} Mar 18 15:44:05 crc kubenswrapper[4857]: I0318 15:44:05.374891 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgqh2" podStartSLOduration=3.893070127 podStartE2EDuration="8.374858346s" podCreationTimestamp="2026-03-18 15:43:57 +0000 UTC" firstStartedPulling="2026-03-18 15:44:00.263678356 +0000 UTC m=+6224.392806813" lastFinishedPulling="2026-03-18 15:44:04.745466565 +0000 UTC m=+6228.874595032" observedRunningTime="2026-03-18 15:44:05.370344703 +0000 UTC m=+6229.499473160" watchObservedRunningTime="2026-03-18 15:44:05.374858346 +0000 UTC m=+6229.503986803" Mar 18 15:44:05 crc kubenswrapper[4857]: I0318 15:44:05.396447 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" podStartSLOduration=3.905106191 podStartE2EDuration="5.396419018s" podCreationTimestamp="2026-03-18 15:44:00 +0000 UTC" firstStartedPulling="2026-03-18 15:44:02.213814186 +0000 UTC m=+6226.342942643" lastFinishedPulling="2026-03-18 15:44:03.705126953 +0000 UTC m=+6227.834255470" observedRunningTime="2026-03-18 15:44:05.391848453 +0000 UTC m=+6229.520976910" 
watchObservedRunningTime="2026-03-18 15:44:05.396419018 +0000 UTC m=+6229.525547475" Mar 18 15:44:06 crc kubenswrapper[4857]: I0318 15:44:06.368972 4857 generic.go:334] "Generic (PLEG): container finished" podID="421ae1e3-7a8a-4652-ade7-17cd5388fc4e" containerID="9575bb1fe61429b638e6df4363f0e605d0415a405eafaa6d9360fdab45302422" exitCode=0 Mar 18 15:44:06 crc kubenswrapper[4857]: I0318 15:44:06.369051 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" event={"ID":"421ae1e3-7a8a-4652-ade7-17cd5388fc4e","Type":"ContainerDied","Data":"9575bb1fe61429b638e6df4363f0e605d0415a405eafaa6d9360fdab45302422"} Mar 18 15:44:07 crc kubenswrapper[4857]: I0318 15:44:07.874745 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:07 crc kubenswrapper[4857]: I0318 15:44:07.975381 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:07 crc kubenswrapper[4857]: I0318 15:44:07.975460 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:07 crc kubenswrapper[4857]: I0318 15:44:07.975621 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf9vk\" (UniqueName: \"kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk\") pod \"421ae1e3-7a8a-4652-ade7-17cd5388fc4e\" (UID: \"421ae1e3-7a8a-4652-ade7-17cd5388fc4e\") " Mar 18 15:44:07 crc kubenswrapper[4857]: I0318 15:44:07.982930 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk" (OuterVolumeSpecName: "kube-api-access-kf9vk") pod "421ae1e3-7a8a-4652-ade7-17cd5388fc4e" (UID: "421ae1e3-7a8a-4652-ade7-17cd5388fc4e"). 
InnerVolumeSpecName "kube-api-access-kf9vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:44:08 crc kubenswrapper[4857]: I0318 15:44:08.080381 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf9vk\" (UniqueName: \"kubernetes.io/projected/421ae1e3-7a8a-4652-ade7-17cd5388fc4e-kube-api-access-kf9vk\") on node \"crc\" DevicePath \"\"" Mar 18 15:44:08 crc kubenswrapper[4857]: I0318 15:44:08.398964 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" event={"ID":"421ae1e3-7a8a-4652-ade7-17cd5388fc4e","Type":"ContainerDied","Data":"8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006"} Mar 18 15:44:08 crc kubenswrapper[4857]: I0318 15:44:08.399016 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d867bbc2d05ea126f5736da5a6b640862d1c1651881cdb648e4b7242fed0006" Mar 18 15:44:08 crc kubenswrapper[4857]: I0318 15:44:08.399028 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29564144-mc6fm" Mar 18 15:44:08 crc kubenswrapper[4857]: I0318 15:44:08.989218 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564138-62j4z"] Mar 18 15:44:09 crc kubenswrapper[4857]: I0318 15:44:09.006712 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564138-62j4z"] Mar 18 15:44:09 crc kubenswrapper[4857]: I0318 15:44:09.039385 4857 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mgqh2" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="registry-server" probeResult="failure" output=< Mar 18 15:44:09 crc kubenswrapper[4857]: timeout: failed to connect service ":50051" within 1s Mar 18 15:44:09 crc kubenswrapper[4857]: > Mar 18 15:44:09 crc kubenswrapper[4857]: I0318 15:44:09.181994 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a239f10e-d03f-498d-8026-504eb804ae3f" path="/var/lib/kubelet/pods/a239f10e-d03f-498d-8026-504eb804ae3f/volumes" Mar 18 15:44:18 crc kubenswrapper[4857]: I0318 15:44:18.051666 4857 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:18 crc kubenswrapper[4857]: I0318 15:44:18.115190 4857 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:18 crc kubenswrapper[4857]: I0318 15:44:18.306998 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:44:19 crc kubenswrapper[4857]: I0318 15:44:19.562419 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mgqh2" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="registry-server" 
containerID="cri-o://a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34" gracePeriod=2 Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.131614 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.196567 4857 scope.go:117] "RemoveContainer" containerID="7033f6d5fee3f3b1a8d5f1f7aae8947de825e63282628909824495df824a4a61" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.200604 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content\") pod \"db9b870a-fc88-4ad1-95bb-deaea272b152\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.200734 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv88h\" (UniqueName: \"kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h\") pod \"db9b870a-fc88-4ad1-95bb-deaea272b152\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.200862 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities\") pod \"db9b870a-fc88-4ad1-95bb-deaea272b152\" (UID: \"db9b870a-fc88-4ad1-95bb-deaea272b152\") " Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.201669 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities" (OuterVolumeSpecName: "utilities") pod "db9b870a-fc88-4ad1-95bb-deaea272b152" (UID: "db9b870a-fc88-4ad1-95bb-deaea272b152"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.202429 4857 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-utilities\") on node \"crc\" DevicePath \"\"" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.206437 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h" (OuterVolumeSpecName: "kube-api-access-vv88h") pod "db9b870a-fc88-4ad1-95bb-deaea272b152" (UID: "db9b870a-fc88-4ad1-95bb-deaea272b152"). InnerVolumeSpecName "kube-api-access-vv88h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.270547 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db9b870a-fc88-4ad1-95bb-deaea272b152" (UID: "db9b870a-fc88-4ad1-95bb-deaea272b152"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.305646 4857 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9b870a-fc88-4ad1-95bb-deaea272b152-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.305688 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv88h\" (UniqueName: \"kubernetes.io/projected/db9b870a-fc88-4ad1-95bb-deaea272b152-kube-api-access-vv88h\") on node \"crc\" DevicePath \"\"" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.578158 4857 generic.go:334] "Generic (PLEG): container finished" podID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerID="a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34" exitCode=0 Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.578227 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerDied","Data":"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34"} Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.578507 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgqh2" event={"ID":"db9b870a-fc88-4ad1-95bb-deaea272b152","Type":"ContainerDied","Data":"10a8020944b41e59783b50e6e33d9a68130554a652325f32ed5c99d62323bc58"} Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.578537 4857 scope.go:117] "RemoveContainer" containerID="a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.578288 4857 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgqh2" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.612081 4857 scope.go:117] "RemoveContainer" containerID="036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.625968 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.637156 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mgqh2"] Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.642822 4857 scope.go:117] "RemoveContainer" containerID="f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.666289 4857 scope.go:117] "RemoveContainer" containerID="a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34" Mar 18 15:44:20 crc kubenswrapper[4857]: E0318 15:44:20.666847 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34\": container with ID starting with a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34 not found: ID does not exist" containerID="a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.666919 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34"} err="failed to get container status \"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34\": rpc error: code = NotFound desc = could not find container \"a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34\": container with ID starting with a84309d77414398b08ecccb289084abf1525564e46a991407594bf087e455b34 not 
found: ID does not exist" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.666970 4857 scope.go:117] "RemoveContainer" containerID="036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37" Mar 18 15:44:20 crc kubenswrapper[4857]: E0318 15:44:20.667376 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37\": container with ID starting with 036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37 not found: ID does not exist" containerID="036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.667422 4857 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37"} err="failed to get container status \"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37\": rpc error: code = NotFound desc = could not find container \"036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37\": container with ID starting with 036ac54c1bc981a46aa34f5ec8e09adeb66c92fb027bae6ebcb37b039813cd37 not found: ID does not exist" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.667448 4857 scope.go:117] "RemoveContainer" containerID="f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002" Mar 18 15:44:20 crc kubenswrapper[4857]: E0318 15:44:20.667675 4857 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002\": container with ID starting with f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002 not found: ID does not exist" containerID="f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002" Mar 18 15:44:20 crc kubenswrapper[4857]: I0318 15:44:20.667704 4857 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002"} err="failed to get container status \"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002\": rpc error: code = NotFound desc = could not find container \"f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002\": container with ID starting with f0d37ad63dac125dc5ddcecf63cfaf779a640e8e6fca34046af284d34bc45002 not found: ID does not exist"
Mar 18 15:44:21 crc kubenswrapper[4857]: I0318 15:44:21.179732 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" path="/var/lib/kubelet/pods/db9b870a-fc88-4ad1-95bb-deaea272b152/volumes"
Mar 18 15:44:57 crc kubenswrapper[4857]: I0318 15:44:57.038947 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:44:57 crc kubenswrapper[4857]: I0318 15:44:57.039483 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.561639 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"]
Mar 18 15:45:00 crc kubenswrapper[4857]: E0318 15:45:00.562680 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421ae1e3-7a8a-4652-ade7-17cd5388fc4e" containerName="oc"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.562698 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="421ae1e3-7a8a-4652-ade7-17cd5388fc4e" containerName="oc"
Mar 18 15:45:00 crc kubenswrapper[4857]: E0318 15:45:00.562720 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="registry-server"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.562726 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="registry-server"
Mar 18 15:45:00 crc kubenswrapper[4857]: E0318 15:45:00.562805 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="extract-utilities"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.562816 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="extract-utilities"
Mar 18 15:45:00 crc kubenswrapper[4857]: E0318 15:45:00.562837 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="extract-content"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.562845 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="extract-content"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.563696 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9b870a-fc88-4ad1-95bb-deaea272b152" containerName="registry-server"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.563915 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="421ae1e3-7a8a-4652-ade7-17cd5388fc4e" containerName="oc"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.565036 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.578292 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.578487 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.606452 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"]
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.662993 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mqh\" (UniqueName: \"kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.663061 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.663258 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.765292 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.765516 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5mqh\" (UniqueName: \"kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.765551 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.766576 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.772324 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.792927 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5mqh\" (UniqueName: \"kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh\") pod \"collect-profiles-29564145-x7jqt\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:00 crc kubenswrapper[4857]: I0318 15:45:00.907296 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:01 crc kubenswrapper[4857]: I0318 15:45:01.486286 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"]
Mar 18 15:45:01 crc kubenswrapper[4857]: I0318 15:45:01.606571 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt" event={"ID":"bcc4fc83-5473-4c92-8236-ab85b34df958","Type":"ContainerStarted","Data":"cf7cd940525faa458284aa0aef980c95a2d17092514ffaa924f6dac527d62631"}
Mar 18 15:45:02 crc kubenswrapper[4857]: I0318 15:45:02.861656 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt" event={"ID":"bcc4fc83-5473-4c92-8236-ab85b34df958","Type":"ContainerStarted","Data":"bc9f0d1b4edc151ad6e32c6f2aea8b1402f7e1448a2daae293c92aa4eb4600e3"}
Mar 18 15:45:03 crc kubenswrapper[4857]: I0318 15:45:03.876387 4857 generic.go:334] "Generic (PLEG): container finished" podID="bcc4fc83-5473-4c92-8236-ab85b34df958" containerID="bc9f0d1b4edc151ad6e32c6f2aea8b1402f7e1448a2daae293c92aa4eb4600e3" exitCode=0
Mar 18 15:45:03 crc kubenswrapper[4857]: I0318 15:45:03.876485 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt" event={"ID":"bcc4fc83-5473-4c92-8236-ab85b34df958","Type":"ContainerDied","Data":"bc9f0d1b4edc151ad6e32c6f2aea8b1402f7e1448a2daae293c92aa4eb4600e3"}
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.309960 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.452618 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mqh\" (UniqueName: \"kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh\") pod \"bcc4fc83-5473-4c92-8236-ab85b34df958\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") "
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.452813 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume\") pod \"bcc4fc83-5473-4c92-8236-ab85b34df958\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") "
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.452982 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume\") pod \"bcc4fc83-5473-4c92-8236-ab85b34df958\" (UID: \"bcc4fc83-5473-4c92-8236-ab85b34df958\") "
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.455319 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume" (OuterVolumeSpecName: "config-volume") pod "bcc4fc83-5473-4c92-8236-ab85b34df958" (UID: "bcc4fc83-5473-4c92-8236-ab85b34df958"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.481743 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh" (OuterVolumeSpecName: "kube-api-access-t5mqh") pod "bcc4fc83-5473-4c92-8236-ab85b34df958" (UID: "bcc4fc83-5473-4c92-8236-ab85b34df958"). InnerVolumeSpecName "kube-api-access-t5mqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.482070 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bcc4fc83-5473-4c92-8236-ab85b34df958" (UID: "bcc4fc83-5473-4c92-8236-ab85b34df958"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.559636 4857 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bcc4fc83-5473-4c92-8236-ab85b34df958-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.559678 4857 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcc4fc83-5473-4c92-8236-ab85b34df958-config-volume\") on node \"crc\" DevicePath \"\""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.559688 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5mqh\" (UniqueName: \"kubernetes.io/projected/bcc4fc83-5473-4c92-8236-ab85b34df958-kube-api-access-t5mqh\") on node \"crc\" DevicePath \"\""
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.890566 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt" event={"ID":"bcc4fc83-5473-4c92-8236-ab85b34df958","Type":"ContainerDied","Data":"cf7cd940525faa458284aa0aef980c95a2d17092514ffaa924f6dac527d62631"}
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.891709 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf7cd940525faa458284aa0aef980c95a2d17092514ffaa924f6dac527d62631"
Mar 18 15:45:04 crc kubenswrapper[4857]: I0318 15:45:04.890800 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29564145-x7jqt"
Mar 18 15:45:05 crc kubenswrapper[4857]: I0318 15:45:05.808844 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"]
Mar 18 15:45:05 crc kubenswrapper[4857]: I0318 15:45:05.820318 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29564100-q8x6q"]
Mar 18 15:45:07 crc kubenswrapper[4857]: I0318 15:45:07.180830 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab05d322-74c0-4edb-b81e-ef6338d60930" path="/var/lib/kubelet/pods/ab05d322-74c0-4edb-b81e-ef6338d60930/volumes"
Mar 18 15:45:10 crc kubenswrapper[4857]: I0318 15:45:10.787877 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jcmxv" podUID="f0f5b83c-4e1c-4bc4-8530-e3e7d8e74be4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 15:45:20 crc kubenswrapper[4857]: I0318 15:45:20.421933 4857 scope.go:117] "RemoveContainer" containerID="eb466d8ba0bda9256d523667f2c099b03d810b3ca8453b08a11121c835115b02"
Mar 18 15:45:27 crc kubenswrapper[4857]: I0318 15:45:27.038651 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:45:27 crc kubenswrapper[4857]: I0318 15:45:27.039016 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.038712 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.040801 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.041083 4857 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6"
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.042493 4857 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b83bd0d190b74d8409c4dbeaad53f50cbd1448eb39e1cfc8e59cef41ac7743b5"} pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.042723 4857 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" containerID="cri-o://b83bd0d190b74d8409c4dbeaad53f50cbd1448eb39e1cfc8e59cef41ac7743b5" gracePeriod=600
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.630842 4857 generic.go:334] "Generic (PLEG): container finished" podID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerID="b83bd0d190b74d8409c4dbeaad53f50cbd1448eb39e1cfc8e59cef41ac7743b5" exitCode=0
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.630959 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerDied","Data":"b83bd0d190b74d8409c4dbeaad53f50cbd1448eb39e1cfc8e59cef41ac7743b5"}
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.631505 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" event={"ID":"b115eb6c-2a12-4d60-b269-911a639d8eb1","Type":"ContainerStarted","Data":"804c72d5a94aeb4ec095fe29d61c7a3fa52f0b5cac35a6640fa5cb0d447148b7"}
Mar 18 15:45:57 crc kubenswrapper[4857]: I0318 15:45:57.631546 4857 scope.go:117] "RemoveContainer" containerID="55436834c51d1b1b7bbd69ff9147f4c7acdef72a253eb99fb24fd3b9bf6e38eb"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.246645 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564146-22ksg"]
Mar 18 15:46:00 crc kubenswrapper[4857]: E0318 15:46:00.248029 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc4fc83-5473-4c92-8236-ab85b34df958" containerName="collect-profiles"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.248057 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc4fc83-5473-4c92-8236-ab85b34df958" containerName="collect-profiles"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.248435 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc4fc83-5473-4c92-8236-ab85b34df958" containerName="collect-profiles"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.251505 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.254516 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.254569 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.254603 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.267730 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564146-22ksg"]
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.342464 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndslc\" (UniqueName: \"kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc\") pod \"auto-csr-approver-29564146-22ksg\" (UID: \"969b93f0-9900-40ac-a9f5-5672067f06a5\") " pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.447705 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndslc\" (UniqueName: \"kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc\") pod \"auto-csr-approver-29564146-22ksg\" (UID: \"969b93f0-9900-40ac-a9f5-5672067f06a5\") " pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.501997 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndslc\" (UniqueName: \"kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc\") pod \"auto-csr-approver-29564146-22ksg\" (UID: \"969b93f0-9900-40ac-a9f5-5672067f06a5\") " pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:00 crc kubenswrapper[4857]: I0318 15:46:00.590423 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:01 crc kubenswrapper[4857]: I0318 15:46:01.544944 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564146-22ksg"]
Mar 18 15:46:01 crc kubenswrapper[4857]: I0318 15:46:01.698574 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564146-22ksg" event={"ID":"969b93f0-9900-40ac-a9f5-5672067f06a5","Type":"ContainerStarted","Data":"127968e8877f2e1700baf0e1d9a9b83a49fb1fe716148eb1bd9c9dc1669b8de5"}
Mar 18 15:46:03 crc kubenswrapper[4857]: I0318 15:46:03.768366 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564146-22ksg" event={"ID":"969b93f0-9900-40ac-a9f5-5672067f06a5","Type":"ContainerStarted","Data":"a704ed70b70aec0ca5d14b506ab63c3e4b78df0a5d59355dc38e7a4d3126a03e"}
Mar 18 15:46:04 crc kubenswrapper[4857]: I0318 15:46:04.808467 4857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29564146-22ksg" podStartSLOduration=3.399275577 podStartE2EDuration="4.80843241s" podCreationTimestamp="2026-03-18 15:46:00 +0000 UTC" firstStartedPulling="2026-03-18 15:46:01.555728156 +0000 UTC m=+6345.684856613" lastFinishedPulling="2026-03-18 15:46:02.964884989 +0000 UTC m=+6347.094013446" observedRunningTime="2026-03-18 15:46:04.800518311 +0000 UTC m=+6348.929646768" watchObservedRunningTime="2026-03-18 15:46:04.80843241 +0000 UTC m=+6348.937560867"
Mar 18 15:46:07 crc kubenswrapper[4857]: I0318 15:46:07.825336 4857 generic.go:334] "Generic (PLEG): container finished" podID="969b93f0-9900-40ac-a9f5-5672067f06a5" containerID="a704ed70b70aec0ca5d14b506ab63c3e4b78df0a5d59355dc38e7a4d3126a03e" exitCode=0
Mar 18 15:46:07 crc kubenswrapper[4857]: I0318 15:46:07.825432 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564146-22ksg" event={"ID":"969b93f0-9900-40ac-a9f5-5672067f06a5","Type":"ContainerDied","Data":"a704ed70b70aec0ca5d14b506ab63c3e4b78df0a5d59355dc38e7a4d3126a03e"}
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.627404 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.730242 4857 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndslc\" (UniqueName: \"kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc\") pod \"969b93f0-9900-40ac-a9f5-5672067f06a5\" (UID: \"969b93f0-9900-40ac-a9f5-5672067f06a5\") "
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.737368 4857 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc" (OuterVolumeSpecName: "kube-api-access-ndslc") pod "969b93f0-9900-40ac-a9f5-5672067f06a5" (UID: "969b93f0-9900-40ac-a9f5-5672067f06a5"). InnerVolumeSpecName "kube-api-access-ndslc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.833827 4857 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndslc\" (UniqueName: \"kubernetes.io/projected/969b93f0-9900-40ac-a9f5-5672067f06a5-kube-api-access-ndslc\") on node \"crc\" DevicePath \"\""
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.860836 4857 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29564146-22ksg" event={"ID":"969b93f0-9900-40ac-a9f5-5672067f06a5","Type":"ContainerDied","Data":"127968e8877f2e1700baf0e1d9a9b83a49fb1fe716148eb1bd9c9dc1669b8de5"}
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.860897 4857 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="127968e8877f2e1700baf0e1d9a9b83a49fb1fe716148eb1bd9c9dc1669b8de5"
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.860924 4857 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564146-22ksg"
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.932317 4857 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29564140-lgd8m"]
Mar 18 15:46:09 crc kubenswrapper[4857]: I0318 15:46:09.945559 4857 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29564140-lgd8m"]
Mar 18 15:46:11 crc kubenswrapper[4857]: I0318 15:46:11.187613 4857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9355f62b-3740-440c-8eaf-8260c8993413" path="/var/lib/kubelet/pods/9355f62b-3740-440c-8eaf-8260c8993413/volumes"
Mar 18 15:46:20 crc kubenswrapper[4857]: I0318 15:46:20.557730 4857 scope.go:117] "RemoveContainer" containerID="48bad40d2ff8647bcbf6a6d60ee5f0e8c974b3b960a433a098f071969ab79d21"
Mar 18 15:47:21 crc kubenswrapper[4857]: I0318 15:47:21.891032 4857 trace.go:236] Trace[249314542]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (18-Mar-2026 15:47:20.866) (total time: 1021ms):
Mar 18 15:47:21 crc kubenswrapper[4857]: Trace[249314542]: [1.021967379s] [1.021967379s] END
Mar 18 15:47:57 crc kubenswrapper[4857]: I0318 15:47:57.039550 4857 patch_prober.go:28] interesting pod/machine-config-daemon-sjqg6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 18 15:47:57 crc kubenswrapper[4857]: I0318 15:47:57.040231 4857 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sjqg6" podUID="b115eb6c-2a12-4d60-b269-911a639d8eb1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.161918 4857 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29564148-w2fvd"]
Mar 18 15:48:00 crc kubenswrapper[4857]: E0318 15:48:00.164561 4857 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969b93f0-9900-40ac-a9f5-5672067f06a5" containerName="oc"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.164883 4857 state_mem.go:107] "Deleted CPUSet assignment" podUID="969b93f0-9900-40ac-a9f5-5672067f06a5" containerName="oc"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.165388 4857 memory_manager.go:354] "RemoveStaleState removing state" podUID="969b93f0-9900-40ac-a9f5-5672067f06a5" containerName="oc"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.167002 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564148-w2fvd"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.172362 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.173102 4857 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-j6p78"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.173578 4857 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.178162 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564148-w2fvd"]
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.308886 4857 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4c7w\" (UniqueName: \"kubernetes.io/projected/48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a-kube-api-access-l4c7w\") pod \"auto-csr-approver-29564148-w2fvd\" (UID: \"48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a\") " pod="openshift-infra/auto-csr-approver-29564148-w2fvd"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.412651 4857 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4c7w\" (UniqueName: \"kubernetes.io/projected/48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a-kube-api-access-l4c7w\") pod \"auto-csr-approver-29564148-w2fvd\" (UID: \"48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a\") " pod="openshift-infra/auto-csr-approver-29564148-w2fvd"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.437162 4857 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4c7w\" (UniqueName: \"kubernetes.io/projected/48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a-kube-api-access-l4c7w\") pod \"auto-csr-approver-29564148-w2fvd\" (UID: \"48b4fdac-a4bd-4eae-95d9-1d27b1f5da9a\") " pod="openshift-infra/auto-csr-approver-29564148-w2fvd"
Mar 18 15:48:00 crc kubenswrapper[4857]: I0318 15:48:00.527379 4857 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29564148-w2fvd"
Mar 18 15:48:01 crc kubenswrapper[4857]: I0318 15:48:01.641608 4857 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29564148-w2fvd"]